ADVANCES IN ENERGY RESEARCH
ADVANCES IN ENERGY RESEARCH. VOLUME 1
No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.
ADVANCES IN ENERGY RESEARCH

Additional books in this series can be found on Nova’s website under the Series tab.
Additional E-books in this series can be found on Nova’s website under the E-book tab.
ADVANCES IN ENERGY RESEARCH
ADVANCES IN ENERGY RESEARCH. VOLUME 1
MORENA J. ACOSTA
EDITOR
Nova Science Publishers, Inc. New York
Copyright © 2011 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher.

For permission to use material from this book please contact us: Telephone 631-231-7269; Fax 631-231-8175; Web Site: http://www.novapublishers.com

NOTICE TO THE READER

The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works.
Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication.

This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

Additional color graphics may be available in the e-book version of this book.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

ISSN 2157-1562
ISBN 978-1-61668-994-0
Published by Nova Science Publishers, Inc. New York
CONTENTS

Preface  vii
Chapter 1. Tropical Cyclone-Ocean Interaction: Numerical Studies (Akiyoshi Wada)  1
Chapter 2. The Future of Energy: The Global Challenge (Mustafa Omer)  69
Chapter 3. Tropical Cyclone-Ocean Interaction: Climatology (Akiyoshi Wada)  99
Chapter 4. Sunlight and Skylight Availability (Stanislav Darula and Richard Kittler)  133
Chapter 5. The Inevitability of Continuing Global Anthropogenic Environmental Degradation (G.P. Glasby)  183
Chapter 6. General Overview for Worldwide Trend of Fossil Fuels (Erkan Topal and Shahriar Shafiee)  203
Chapter 7. Scenario Discovery and Temporal Analysis for Energy Consumption Forecasting of the Brazilian Amazon Power Suppliers (A. Cláudio Rocha, L. Ádamo de Santana, A.B. Guilherme Conde, R. Carlos Francês and L. Nandamudi Vijaykumar)  225
Chapter 8. Efficient Low Power Scheduling for Heterogeneous Dual-Core Embedded Real-Time Systems (Pochun Lin and Kuochen Wang)  241
Chapter 9. Work-Energy Approach to the Formulation of Expression of Wind Power (Reccab M. Ochieng and Frederick N. Onyango)  259
Chapter 10. Estimating Energy Consumption and Execution Time of Embedded System Applications (Gustavo Callou, Paulo Maciel, Ermeson Andrade, Bruno Nogueira, Eduardo Tavares and Carlos Araujo)  267
Index  315
PREFACE

This new book presents a comprehensive review of renewable energy sources, the environment and sustainable development. This includes all the renewable energy technologies, materials and their development, energy efficiency systems, energy conservation scenarios, energy savings and other mitigation measures necessary to reduce climate change. Also explored in this volume is ocean energy, its benefits and barriers, and solar radiation as a primary daytime source of energy. Various scenarios are extrapolated in order to assess the potential environmental impact of increasing world population and consumption throughout the 21st century, along with a general overview of the worldwide trend in fossil fuels.

Tropical cyclones (TCs) are among the most feared and deadly weather systems on Earth, and cause tremendous damage and loss of life due to strong winds, torrential rainfall, and storm surges. Such catastrophic natural disasters seriously affect socioeconomic activity and should be mitigated. An energy source of TCs is moisture provided from the ocean. In Chapter 1 the authors describe how TCs interact with the ocean using results of numerical experiments, simulations, and predictions by an ocean general circulation model and a nonhydrostatic atmosphere-wave-ocean coupled model from meteorological and oceanographic points of view. A representative feature of the ocean response to a TC is TC-induced sea-surface cooling (SSC). SSC is mainly caused by vertical turbulent mixing and Ekman pumping, which depend on the traveling speed of a TC. Vertical turbulent mixing is a one-dimensional process for efficiently entraining cool water in the seasonal thermocline into the mixed layer. Shear-induced turbulent kinetic energy (TKE), TKE generation due to wave breaking, and energy dissipation play crucial roles in determining the entrainment rate. In contrast, Ekman pumping is a three-dimensional process and occurs beneath and behind a TC. It helps raise the seasonal thermocline and transport cool water upward, increasing oceanic primary production. SSC leads to suppression of TC intensification, resulting in TC weakening and structural change during the intensification phase when a few kilometer-scale mesovortices formed within a 100km-scale cyclonic circulation are suppressed. In contrast, the negative impact of SSC on TC intensity and its structural change is not significant during the mature phase when a circular ring is established around the TC center. This suggests that TC intensity and intensification are determined from the behavior of mesovortices and their capability to transport moisture to the upper troposphere. Preliminary numerical-experiment results indicate that the impact of ocean waves and the sea state on TC intensity and intensification is less than the impact of SSC and is equivalent to the impact of oceanic
preexisting conditions, which is more remarkable than the impact of short-term oceanic variability such as diurnally varying sea-surface temperature.

There are technologies under development today for carbon capture and storage, in order to create carbon dioxide (CO2) neutral emissions from fossil fuels, mainly coal. Such technologies may be realised within ten years from now, but they will most probably suit very large combined heat and power (CHP) plants, since large investments are expected and plant efficiencies are likely to drop by approximately 10%. Global warming will eventually lead to substantial changes in the world’s climate, which will, in turn, have a major impact on human life and the environment. Cogeneration plants fuelled using waste gases provide an economic and environmentally friendly way of helping to satisfy the heat and power needs of industry or a community. This study has explored the use of waste fuels, explaining some of the main considerations necessary to ensure the cogeneration plant provides the required heat and power in a reliable and efficient manner. Renewable energy technologies (RETs) are particularly suited for the provision of rural and urban power supplies. A major advantage is that equipment such as flat-plate solar driers and wind machines can be constructed using local resources; a further advantage results from the feasibility of local maintenance and the general encouragement such local manufacture gives to the build-up of small-scale rural-based industry. Chapter 2 gives some examples of small-scale energy converters; nevertheless, it should be noted that small conventional engines are currently the major source of power in rural areas and will continue to be so for a long time to come. There is a need for some further development to suit local conditions, to minimise spares holdings, and to maximise interchangeability both of engine parts and of engine applications. Emphasis should be placed on full local manufacture. The adoption of green and/or sustainable approaches to the way in which society is run is seen as an important strategy in finding a solution to the energy problem. The key factors to reducing and controlling CO2, which is the major contributor to global warming, are the use of alternative approaches to energy generation and the exploration of how these alternatives are used today and may be used in the future as green energy sources. Even with modest assumptions about the availability of land, comprehensive fuel-wood farming programmes offer significant energy, economic and environmental benefits. These benefits would be dispersed in rural areas, where they are greatly needed and can serve as linkages for further rural economic development. The nations as a whole would benefit from savings in foreign exchange, improved energy security, and socio-economic improvements. This chapter discusses a comprehensive review of renewable energy sources, environment and sustainable development, including all the renewable energy technologies, materials and their development, energy efficiency systems, energy conservation scenarios, energy savings and other mitigation measures necessary to reduce climate change.

The ocean is an energy source for developing tropical cyclones (TCs) that originate over the tropical oceans. Warm water and winds are crucial factors for determining heat and moisture fluxes from the ocean to the atmosphere. These fluxes are closely associated with cumulus convection and large-scale condensation due to latent heat release in the upper troposphere. Both physical processes are essential for increasing the upper tropospheric warm-core temperature around a TC. Therefore, warm water over the tropical oceans is required to generate and intensify TCs. Recently, tropical cyclone heat potential (TCHP), a measure of the oceanic heat content from the surface to the 26°C-isotherm depth, has frequently been used for monitoring TC activity in the global oceans, particularly in the Atlantic and
western North Pacific. Recent studies have reported that TC intensity was correlated with accumulated TCHP (ATCHP), calculated as a summation of TCHP every six hours from TC genesis until the TC first reaches category 4 or 5 on the Saffir-Simpson scale, as well as with sea-surface temperature (SST) and TC duration. This implies that both SST and upper-ocean stratification, such as temperature, salinity, and mixed-layer and seasonal-thermocline depths, play crucial roles in determining TC intensity and intensification. Conversely, TCHP can be varied by mixed-layer deepening and Ekman pumping induced by TC passage through TC-induced sea-surface cooling (SSC). The SSC is evidence that ocean energy is consumed for developing and sustaining TCs. In that sense, a climatological map of TCHP distribution is valuable for assessing the potential of TC activity. A 44-year mean climatological TCHP distribution in the North Pacific indicates that TCHP is locally high in the Southern Hemisphere Central Pacific (SCP) and Western North Pacific (WNP). TCHP varies on interannual and decadal time scales and is related to TC activity. The relatively low TCHP in the WNP is associated with an increase in the total number of TCs. This may indicate that low TCHP is caused by frequent TC-induced SSC. When an El Niño event enters the mature phase, it leads to an increase in the number of super typhoons corresponding to categories 4 and 5. The increase in the number of super typhoons is related to an increase in ATCHP due to the trend of long-duration TCs. Chapter 3 addresses the benefits of TCHP as a useful ocean-energy parameter for monitoring interannual and decadal variability of TC activity in the global ocean.

Solar radiation as a primary daytime source of energy and daylighting needs to be specified for practical purposes. Extraterrestrial parallel-beam irradiance and illuminance defined by the Solar Constant and Luminous Solar Constant serve as a world-wide representation of the maximal availability reaching the Earth. Their time-corrected horizontal values can serve as momentary normalizing levels for sunlight and skylight availability in any location, specifying daily or yearly changes. Direct, parallel sun-beam illuminance at ground level is reduced due to transmittance losses, scattering and reflection caused by atmospheric content, e.g. turbidity/aerosol, pollution and cloudiness. These effects are defined by influential broad-band parameters and illuminance measurements. Atmospheric scattering approximated by indicatrix and gradation functions based on measurements by sky luminance scanners was analyzed for application to quasi-homogeneous sky models with typical luminance patterns occurring world-wide, which were already adopted in the ISO/CIE standard. Diffuse and global illuminance levels were studied under different daily, seasonal and yearly courses with examples of local regular measurements gathered at the Bratislava CIE IDMP station. Measured data were analyzed using parameters and methods suitable for availability evaluations. The intention is to stimulate simple regular illuminance/luminance recording at meteorological stations to document local daylight availability. Examples analyzing daylight climate in different regions were documented with the aim of defining local sunlight and skylight availability changes or territorial distribution applying sunshine duration data. Recommended descriptors and determining influences on daylight climate, their interrelations and approximation formulae for computer studies are presented. Possible graphical representations of real situations and their actual changes are shown in Chapter 4.

In Chapter 5, growth rates of world population, world Gross Domestic Product (GDP) and total wealth created for the preceding 10,000 years have been calculated and extrapolated
through the 21st Century based on various scenarios in order to assess the potential environmental impact of increasing world population and consumption throughout this century. The results demonstrate that between 8 and 26 times more wealth will be created in the 21st Century than in the whole of the preceding human history, depending on assumptions regarding the growth rates of world population and world GDP. These calculations show for the first time the unprecedented increase in resource consumption that will occur in the 21st Century compared with that in the preceding 10,000 years of human history. This increase will result in a massive environmental deficit by the turn of the century and implies that we are on course to overwhelm the natural environment on which we depend for our tenure of this planet within this century. This will pose severe problems for the enhanced world population anticipated later this century. In this situation, it will be necessary to moderate our lifestyles in an attempt to achieve a more sustainable development of the environment. Unless vigorous steps are taken to curtail population growth, resource consumption and global CO2 emissions, human prospects for the 21st Century and beyond do not look particularly encouraging.

Crude oil, coal and gas, known as fossil fuels, are the main sources of world energy supply. Even though worldwide research has been conducted into other renewable energy resources to replace fossil fuels, the global energy market will continue to depend on fossil fuels, which are expected to satisfy approximately 84% of energy demand in 2030. Views about the reserves of fossil fuels differ. To date there is no scientific consensus on when nonrenewable energy will be exhausted. Based on available reserve data and methods, coal will be the only remaining fossil fuel after 2042 and will be available until 2112. The world reserve of fossil fuels mainly depends on its consumption and prices. The trend of fossil fuel consumption over the last couple of decades has shown an upward tendency, and it is expected to continue until at least 2030. Current predictions indicate that oil will be the main fuel supply of energy until 2030, with a decline in consumption, followed by coal and gas. While nominal prices for fossil fuels have followed an escalating trend, real prices have individually fluctuated. Forecasting future fossil fuel prices is uncertain because it is difficult to consider all the significant variables as well as the political implications in price forecasting models. Chapter 6 individually reviews reserves, demand, supply, and prices of fossil fuels. Subsequently it predicts and comments on the future expectations for fossil fuels as a main source of world energy supply by considering their expected reserves, prices and the environmental barriers to their usage.

Usually, power distributors estimate energy consumption based on the historical values of the consumption alone. This consideration, however, tends to compromise the accuracy of predicted values, particularly in areas like the Amazon region that are very susceptible to climate and economic variations. With this in mind, a useful tool could be made available to the power suppliers to allow establishing metrics for measuring the impact that other random variables (e.g. economic and climatic) have on the variation of energy consumption, so that it would be possible to predict scenarios and the progression of their behavior, in order to achieve a more economic, safe and reliable setting for the supplier. Chapter 7 presents a new model, based on mathematical and computational intelligence techniques, in order to meet these needs. In particular, the authors considered the peculiarities of regions like the Amazon. The contributions of this work are threefold: first, with respect to the establishment of correlations among economic, climate and energy consumption data, by using Bayesian networks (BN); second, a model is introduced to explore the discovery of
scenarios, implementing a hybrid algorithm combining genetic algorithms and Bayesian inference, thus allowing decision-makers to estimate which economic conditions favor the occurrence of a given target of energy consumption; third, with respect to the forecasting of consumption, by developing a new model for temporal analysis of the envisioned scenarios and inferences, which applies probabilistic Bayesian models with Markovian-driven temporal analysis. From the models developed, it was possible to create a complete decision support environment for managers of the power suppliers, providing means to establish more advantageous energy contracts in the future market and analyze the favorable scenarios based on the climatic variations and the social and economic conditions of a given region.

In recent years, heterogeneous dual-core embedded real-time systems, such as personal digital assistants (PDAs) and smart phones, have become more and more popular. In order to achieve real-time performance and low energy consumption, low power scheduling for such systems becomes a critical issue. Most research on low power scheduling with dynamic voltage scaling (DVS) has targeted only one CPU or homogeneous multi-core systems. In Chapter 8, the authors propose a low power scheduling algorithm called Longer Common Execution Time (LCET) for DVS-enabled heterogeneous dual-core embedded real-time systems, which includes two steps. First, the authors reduce the total execution time of tasks by using LCET in heterogeneous dual-core embedded real-time systems. Second, the authors exploit the reduced total execution time to adjust voltage and frequency levels to further reduce the total energy consumption. Simulation results show that the proposed P-LCET (a preemptive version) and NP-LCET (a non-preemptive version) can effectively reduce the total energy consumption by 8% and 16% ~ 25% (13% and 33% ~ 38%) compared with the work by Kim et al. with (without) dynamic voltage scaling.

Chapter 9 touches on a fundamental aspect of wind energy calculation, and goes ahead to formulate three expressions of wind power. The paper attempts to answer the question whether the kinetic energy of a unit mass per second is 1/2, 1/3, or 2/3 multiplied by v³. The answer to this question is of importance for fluid dynamic considerations in general. The classical formulation of wind energy for turbines is based on the definition of the kinetic energy due to the wind impinging on the turbine blades. The expression of wind energy obtained is directly related to half (1/2) of the specific mass, ρ, multiplied by the cube of the wind velocity. Usually the assumption used is that the mass is constant. However, by changing this condition, different results arise. The approach by Zekai [1], based first on the basic definition of force and then energy (work), reveals that the same equation is valid but with a factor of 1/3 instead of 1/2. In his derivation, Zekai [1] has not given any reason as to why a factor of 2/3, which can be obtained using his approach, is not acceptable. The authors advance arguments to show that three expressions of wind energy are possible through physical formulation.

Over the last years, the issue of reducing energy consumption in embedded system applications has received considerable attention from the scientific community, since responsiveness and low energy consumption are often conflicting requirements. Moreover, embedded devices may also have timing constraints, in the sense that not only the logical results of computations are important, but also the time instant in which they are obtained. In this context, Chapter 10 presents a methodology applied in early design phases for supporting design decisions on energy consumption and performance of embedded system applications. The proposed methodology adopts a formalism for modeling the functional behavior of hardware architectures at a high level of abstraction. It considers an intermediate model which represents the system behavioral description and, through the composition of these
basic models, the scenarios are analyzed. The intermediate model is based on Coloured Petri Nets, a formal behavioral model that not only allows software execution analysis but is also supported by a set of well-established methods for property verification. In addition, this chapter also presents ALUPAS, a software tool developed for estimating energy consumption and execution time of embedded systems. ALUPAS can provide important insights to the designer about the battery lifetime as well as the parts of the application that need optimization. Lastly, real case studies as well as customized examples illustrate the applicability of the proposed methodology, in which non-specialized users do not need to interact directly with the Petri net formalism. It is also important to highlight that pieces of code that are either energy- or time-consuming were also identified. Moreover, the simulations provide accurate results with much smaller computational effort than measurements on the hardware platform.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 1-67
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 1
TROPICAL CYCLONE-OCEAN INTERACTION: NUMERICAL STUDIES

Akiyoshi Wada*
Meteorological Research Institute, Tsukuba, Ibaraki, Japan
Abstract
Tropical cyclones (TCs) are among the most feared and deadly weather systems on Earth, and cause tremendous damage and loss of life due to strong winds, torrential rainfall, and storm surges. Such catastrophic natural disasters seriously affect socioeconomic activity and should be mitigated. An energy source of TCs is moisture provided from the ocean. Here we describe how TCs interact with the ocean using results of numerical experiments, simulations, and predictions by an ocean general circulation model and nonhydrostatic atmosphere-wave-ocean coupled model from meteorological and oceanographic points of view. A representative feature of the ocean response to a TC is TC-induced sea-surface cooling (SSC). SSC is mainly caused by vertical turbulent mixing and Ekman pumping, which depend on the traveling speed of a TC. Vertical turbulent mixing is a one-dimensional process for efficiently entraining cool water in the seasonal thermocline into the mixed layer. Shear-induced turbulent kinetic energy (TKE), TKE generation due to wave breaking, and energy dissipation play crucial roles in determining the entrainment rate. In contrast, Ekman pumping is a three-dimensional process and occurs beneath and behind a TC. It helps raise the seasonal thermocline and transport cool water upward, increasing oceanic primary production. SSC leads to suppression of TC intensification, resulting in TC weakening and structural change during the intensification phase when a few kilometer-scale mesovortices formed within a 100km-scale cyclonic circulation are suppressed. In contrast, the negative impact of SSC on TC intensity and its structural change is not significant during the mature phase when a circular ring is established around the TC center. This suggests that TC intensity and intensification are determined from the behavior of mesovortices and their capability to transport moisture to the upper troposphere. Preliminary numerical-experiment results indicate that the impact of ocean waves and the sea state on TC intensity and intensification is less than the impact of SSC and is equivalent to the impact of oceanic preexisting conditions, which is more remarkable than the impact of short-term oceanic variability such as diurnally varying sea-surface temperature.
* E-mail address: [email protected]; TEL: +81-29-852-9154; FAX: +81-29-853-8735. (Corresponding author)
1. Introduction

A tropical cyclone (TC) is a kind of vortex with a radius of several hundred kilometers. The energy source for TC formation and intensification is the warm ocean in tropical or subtropical basins. Understanding the mechanisms of TC formation and intensification, including how and to what extent the ocean energy is utilized, is essential for numerical models to precisely predict TCs. In addition, accurate initial atmospheric and oceanic fields are required as initial conditions for a numerical model to precisely predict TCs. However, it is very difficult to directly obtain high-quality observation data around a TC due to its strong wind, heavy rainfall, and high waves. Even though recent innovations in satellite observation enable us to monitor global atmospheric and oceanic fields, satellite observations always confront the problem that thick cumulonimbus clouds around a TC inhibit surface observations. Recent developments in technologies associated with in situ observations provide us with scarce but valuable data around a TC, but observational opportunities are limited. That is why researchers are eager to look at the three-dimensional structure of a TC and the underlying ocean and their evolution.

Numerical study is one of the approaches for reproducing the three-dimensional structure of a TC and the underlying ocean. Current numerical studies associated with TC evolution have progressed due to the recent development of super-computer technologies. These studies enable us to realistically simulate the three-dimensional structure of a TC and the underlying ocean and their evolution by a sophisticated TC-wave-ocean coupled model [1]. A coupled model is now generally utilizable for numerical experiments, simulations, and predictions of TCs. Here, we are expanding the definitions of the terms “experiment,” “simulation,” and “prediction” for use in the present chapter. The term “experiment” includes idealized and simplified studies for understanding the essence of TC genesis, intensification, and structural change. The term “simulation” indicates the reproduction of TCs using given initial and boundary conditions. A simulated TC is more realistic than an idealized TC calculated in a numerical experiment. The term “prediction” means that we can predict the TC evolution in advance, but the evolution does not always occur in the real world. We need to improve TC predictions for mitigating the natural disasters caused by them.

Previous numerical studies (including numerical experiments, simulations and operational predictions) have been implemented under a given sea-surface temperature (SST) fixed as a surface-boundary condition during the integration. The fixed SST leads to the overdevelopment of a TC as the integration progresses [2-3]. Overdevelopment is a serious problem particularly for numerical predictions, and is caused by the lack of sea-surface cooling (SSC) induced by a TC [2-3]. SSC depends on not only a TC’s translation speed and intensity but also the mixed-layer depth and the vertical gradient of sea temperature in the seasonal thermocline [4-5]. When a TC passes over a warm-core eddy, little SSC occurs due to its deep mixed layer, resulting in the sustenance or the development of a TC without suppressing TC intensification [6-11]. In contrast, recent studies reported that SSC reached up to 6°C [12] and sometimes up to 9°C [13]. The essential dynamic and physical processes for sea-surface temperature (SST) reduction are Ekman pumping and vertical turbulent mixing, resulting in an enhanced chlorophyll-a (chl-a) concentration [14,15]. The increase in chl-a usually lasts two to three weeks before
chl-a returns to the normal pre-TC level [15], even though the decrease in SST is restored depending on the magnitude of wind speed and short-wave radiation [16]. Most studies associated with the impact of a TC on SSC have been performed numerically with a horizontal grid spacing of 10km or much coarser [3, 17-21], even though a horizontal grid spacing of 1 to 2km or less is required for resolving the inner core of a TC [1]. These studies proved that SSC resulted in significant decreases in turbulent heat fluxes and static moist energy within a radius of nearly 100km from the TC’s center [19]; remarkable changes in horizontal distributions of upward and downward flows, low-troposphere convergence, and upper-troposphere divergence [19]; and a change in precipitation from axisymmetric-like to asymmetric-like patterns [20].

From an energy point of view, the inner core of a TC is formed where isothermal lower-tropospheric inflow, isothermal upper-tropospheric outflow, and adiabatic expansions occur, assuming that a TC is approximately expressed as a simple Carnot heat engine [22-25] (a rough numerical illustration of this view is sketched below). Heating the atmosphere through the isothermal process and cooling the atmosphere as the pressure is reduced are considered to be important for converting heat energy into mechanical energy. However, TC intensification is not fully explained by a simple Carnot cycle because of the uncertainty of physical processes and the environmental complexity in the real atmosphere and ocean.

This chapter addresses the role of the ocean in determining TC intensity and intensification, focusing on the physical, chemical and biological responses to a TC; the atmospheric response to oceanic environments; and the impacts of local SSC and sea state on a weather-forecasting time scale from a numerical point of view.
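As a rough illustration of the Carnot view (standard textbook values assumed here, not results from this chapter), the thermodynamic efficiency of a heat engine operating between a tropical sea-surface temperature $T_s$ and an upper-tropospheric outflow temperature $T_o$ is

$$\varepsilon = \frac{T_s - T_o}{T_s} \approx \frac{300\,\mathrm{K} - 200\,\mathrm{K}}{300\,\mathrm{K}} \approx \frac{1}{3},$$

i.e., only a fraction of the heat extracted from the ocean can be converted into the mechanical energy of the storm.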
2. Numerical Models
2.1. Mixed-Layer Ocean Model

A mixed-layer ocean model [5, 26-28] is formulated based on the Boussinesq and hydrostatic approximations. The origin of the coordinate system of the mixed-layer model is located at the undisturbed sea surface, and downward is positive in the vertical coordinate. An important feature of the mixed-layer model is that entrainment occurs at the interfacial zone between the mixed layer and the seasonal thermocline. We assume that the exchanges of mass, heat, and momentum occur through vertical turbulent mixing that produces the entrainment at the interfacial zone. The bulk of the energy fed into the mixed layer from the atmosphere is trapped at the interface. Most of the energy is dissipated by wave breaking within the mixed layer, and an insignificant amount of the energy is radiated to the seasonal thermocline. Therefore, we can assume that the turbulence caused by the atmospheric forcing of a TC is confined within the mixed layer and that a discontinuous surface is regarded as an infinitely thin interfacial zone. The turbulence within the mixed layer entrains mass, heat, and momentum across the mixed-layer base. The entrainment formulations previously proposed can be roughly divided into two types: the dynamic instability model (DIM) and the Kraus-Turner type [27,29]. This chapter uses an entrainment formulation including both the DIM and the Kraus-Turner type [28]. Since heat does not penetrate further down into the oceanic interior, we assume that the temperature at the base of the seasonal thermocline and at the bottom remain unchanged in the
mixed-layer model. Also, horizontal diffusion and bottom friction are neglected. Accordingly, the main target region is the outer continental shelf and deep ocean, although we use the mixed-layer ocean model around the coastal region when the TC-ocean coupled model is implemented. The prognostic variables are the layer thicknesses $h_i$ and horizontal current velocity $\mathbf{v}_i$ in each layer ($i = 1, \ldots, N$, where $N$ is the number of model layers), mixed-layer temperature including SST ($T_1$), mixed-layer salinity ($S_1$), temperature at the top of the seasonal thermocline ($T_2$), and salinity at the top of the seasonal thermocline ($S_2$). Their evolutions are governed by the equations of continuity, momentum, and heat balance. In addition, the equation of state [30] is used for relating the density profile to the temperature and salinity profiles. The equations for the layer thicknesses are
$$\frac{\partial h_1}{\partial t} + \nabla \cdot (h_1 \mathbf{v}_1) = w_e, \qquad (1)$$

$$\frac{\partial h_2}{\partial t} + \nabla \cdot (h_2 \mathbf{v}_2) = -w_e, \qquad (2)$$

$$\frac{\partial h_i}{\partial t} + \nabla \cdot (h_i \mathbf{v}_i) = 0 \quad (i = 3, \ldots, N), \qquad (3)$$

where the subscript indicates the number of the layer and $w_e$ is the rate of entrainment across the mixed-layer base. The momentum equations are
$$\frac{\partial (h_1 \mathbf{v}_1)}{\partial t} + \nabla \cdot (\mathbf{v}_1 h_1 \mathbf{v}_1) + \left(2\Omega \sin\phi + \frac{u_1 \tan\phi}{a}\right)\mathbf{k} \times h_1 \mathbf{v}_1 = -\frac{1}{\rho_0}\mathbf{P}_1 + \frac{\boldsymbol{\tau}}{\rho_0} + w_e \mathbf{v}_2, \qquad (4)$$

$$\frac{\partial (h_2 \mathbf{v}_2)}{\partial t} + \nabla \cdot (\mathbf{v}_2 h_2 \mathbf{v}_2) + \left(2\Omega \sin\phi + \frac{u_2 \tan\phi}{a}\right)\mathbf{k} \times h_2 \mathbf{v}_2 = -\frac{1}{\rho_0}\mathbf{P}_2 - w_e \mathbf{v}_2, \qquad (5)$$

$$\frac{\partial (h_i \mathbf{v}_i)}{\partial t} + \nabla \cdot (\mathbf{v}_i h_i \mathbf{v}_i) + \left(2\Omega \sin\phi + \frac{u_i \tan\phi}{a}\right)\mathbf{k} \times h_i \mathbf{v}_i = -\frac{1}{\rho_0}\mathbf{P}_i, \qquad (6)$$

where $\boldsymbol{\tau}$ is the wind stress vector, $a$ is the radius of the Earth (6370km), $\Omega$ is the angular velocity of the Earth’s rotation (7.2 x 10$^{-5}$ s$^{-1}$), $\phi$ is the latitude, $\rho_0$ is the reference density (1023kg m$^{-3}$), and $\mathbf{P}_i$ is the gradient of pressure integrated over the $i$th layer, defined as [31]

$$\mathbf{P}_i = \left( h_i \frac{\partial P_i}{c\,\partial\lambda},\; h_i \frac{\partial P_i}{a\,\partial\phi} \right) = \frac{g - b_i}{g}\, \nabla \sum_{i=1}^{N} \frac{h_i b_i}{2} + b_i \nabla \left( \sum_{j=1}^{i-1} h_j + \frac{h_i}{2} \right) - \nabla \left( \sum_{j=1}^{i-1} h_j b_j + \frac{h_i b_i}{2} \right) + \mathbf{P}_a, \qquad (7)$$

where $\lambda$ is the longitude, $c = a \cos\phi$, $g$ is the acceleration due to gravity, $\mathbf{P}_a$ is the gradient of sea level pressure, and $b$ is the buoyancy defined as
$$b_i = -g\,\frac{\rho_i - \rho_0}{\rho_0}, \qquad (8)$$

where $\rho_i$ is the density at the $i$th layer. The equations for temperature at the mixed layer and at the top of the thermocline are
$$\frac{\partial (h_1 T_1)}{\partial t} + \nabla \cdot (\mathbf{v}_1 h_1 T_1) = \frac{Q_{tot}}{\rho_1 C_p} + w_e (T_1 - T_2), \qquad (9)$$

$$\frac{\partial (h_2 T_2)}{\partial t} + \nabla \cdot (\mathbf{v}_2 h_2 T_2) = -2 w_e (T_1 - T_2), \qquad (10)$$

where $C_p$ is the specific heat at constant pressure (4.2 kJ kg$^{-1}$ K$^{-1}$) and $Q_{tot}$ is the total heat flux defined as the summation of short-wave absorption, long-wave radiation, and sensible and latent heat fluxes. The equations for salinity at the mixed layer and at the top of the thermocline are
$$\frac{\partial (h_1 S_1)}{\partial t} + \nabla \cdot (\mathbf{v}_1 h_1 S_1) = S_1 (E - P) + w_e (S_1 - S_2), \qquad (11)$$

$$\frac{\partial (h_2 S_2)}{\partial t} + \nabla \cdot (\mathbf{v}_2 h_2 S_2) = -2 w_e (S_1 - S_2), \qquad (12)$$

where $E$ indicates evaporation and $P$ indicates precipitation integrated during a time step.

Next, we briefly describe a modified Deardorff’s entrainment formulation [28, 32]. First, the frictional velocity $u_*$ and Deardorff’s free-convective velocity $w_*$ for the convective mixed layer [32] are defined:
$$u_*^2 = \left\{ \overline{u'w'}^{\,2} + \overline{v'w'}^{\,2} \right\}^{1/2} = \frac{\tau}{\rho}, \qquad (13)$$

$$w_* = \left\{ \frac{g}{\rho}\, \overline{w'\rho'}_s\, h \right\}^{1/3}. \qquad (14)$$

Here, the overbar indicates the horizontal average over an area extending much farther than $h_1$, the prime indicates the local deviation from the mean, $u$ and $v$ are the latitude and longitude components of horizontal velocity, $w$ is the vertical velocity (downward is positive in the $z$ coordinate), and $\rho$ is the density of sea water. The subscript $s$ indicates a value evaluated near the surface. The horizontally homogeneous turbulent kinetic energy (TKE) equation is
$$\frac{\partial (q^2/2)}{\partial t} = \frac{g}{\rho}\,\overline{w'\rho'} - \overline{\mathbf{V}'w'} \cdot \frac{\partial \mathbf{V}}{\partial z} - \frac{\partial}{\partial z}\, \overline{w'\left(\frac{q^2}{2} + \frac{p'}{\rho}\right)} - \varepsilon, \qquad (15)$$

where $q^2$ is twice the TKE, $p$ is the pressure, and $\varepsilon$ is the dissipation rate. This equation is integrated across the interfacial zone between the mixed layer and the seasonal thermocline, from $h_{i1}$ to $h_{i2}$, to give

$$\frac{1}{2} q_{i1}^2 \frac{\partial h_{i1}}{\partial t} - \frac{1}{2} q_{i2}^2 \frac{\partial h_{i2}}{\partial t} = \frac{g}{\rho_{it}}\, \overline{w'\rho'}_{it}\, \Delta h - \overline{\mathbf{V}'w'}_{it} \cdot \Delta \mathbf{V} + \overline{w'\left(\frac{q^2}{2} + \frac{p'}{\rho}\right)}_{i1} - \varepsilon_{it}\, \Delta h, \qquad (16)$$

where the subscript $i1$ denotes the value at the upper side of the interface, the subscript $i2$ that at the lower side, and the subscript $it$ that at the interface. $\Delta \mathbf{V}$ is the velocity difference across the interface; $\Delta h$ is the thickness between the upper side of the interface and the lower side. The following parameterizations are introduced in Eq. (16). First, the left-hand side of Eq. (16), representing the TKE tendency, is

$$q_{i1}^2 \frac{\partial h_{i1}}{\partial t} - q_{i2}^2 \frac{\partial h_{i2}}{\partial t} = c_z\, q_{i1}^2\, w_e, \qquad (17)$$
where $c_z = 1.2$ is a constant. The first term on the right-hand side of Eq. (16), representing the buoyancy production, is

$$\overline{w'\rho_i'} = -\frac{1}{2} c_4\, w_e\, \Delta\rho, \qquad (18)$$

where $c_4 = 0.3$ is a constant and $\Delta\rho$ is the density difference at the interface. The second term on the right-hand side of Eq. (16), representing the interfacial shear production, is

$$\overline{\mathbf{V}'w_i'} = -\frac{1}{2} c_4\, w_e\, \Delta \mathbf{V}. \qquad (19)$$
The third term on the right-hand side of Eq. (16), representing the turbulent transport, is
$$\overline{w'\left(\frac{q^2}{2} + \frac{p'}{\rho}\right)}_{i1} = c_1 \left[ w_*^3 + (\eta^3 + m_d)\, u_*^3 \right], \qquad (20)$$

where the parameter $m_d = 300$ for breaking surface waves induced by wind stress is a constant tuning parameter, and $c_1$ and $\eta^3$ are expressed by the empirical formulas

$$c_1 = 0.05 - 0.03 \exp(-4 Ri_q), \qquad (21)$$

$$\eta^3 = 1.8\left(1 - 2^{-1/2}\right) \cdot 2\Omega \sin\phi \cdot V_m h / u_*^2, \qquad (22)$$
where $Ri_q$ is the critical Richardson number at the interfacial zone and $V_m$ is the mean velocity within the mixed layer. The fourth term on the right-hand side of Eq. (16), representing the dissipation rate, is

$$\varepsilon_i = \left( \frac{c_3\, q_i^3}{\Delta h} + \frac{1}{2}\, \frac{c_4\, F_v\, w_e\, (\Delta V)^2}{\Delta h} \right), \qquad (23)$$

where $c_3 = 0.7$ is a constant and $F_v$ is expressed by the empirical formula

$$F_v = 0.93 \exp\left(-0.35\, Ri_q^{1.5}\right). \qquad (24)$$
Inserting Eqs. (17), (18), (19), (20), and (23) into Eq. (16), dividing by $q_i^3$, and utilizing the definitions

$$Ri_\tau = \frac{c_i^2}{u_*^2}, \quad Ri_* = \frac{c_i^2}{w_*^2}, \quad Ri_v = \frac{c_i^2}{(\Delta V)^2}, \quad c_i = \left( \frac{g}{\rho_i}\, \Delta\rho\, h \right)^{1/2}, \qquad (25)$$

we obtain

$$0 = -\frac{1}{2}\left[ c_z + c_4 Ri_q - \frac{c_4 (1 - F_v)\,(Ri_q / Ri_v)}{\Delta h / h} \right] \left( \frac{w_e}{q_i} \right) + c_1 \left[ \frac{Ri_q}{\Delta h / h} \right]^{3/2} \left( Ri_*^{-3/2} + \eta^3 Ri_\tau^{-3/2} \right) - c_3, \qquad (26)$$

where $\Delta h / h$ is evaluated from the following empirical formula:

$$\Delta h / h = 1 \,/\, \left(1 + 1.9\, Ri_q\right)^{0.4}. \qquad (27)$$
Equation (26) is solved by iteration. Before the iteration, an initial guess for the critical Richardson number $Ri_q$ must be input to Eq. (26). The initial guess for $Ri_q$, namely $Ri_q^{(1)}$, is given by

$$Ri_q^{(1)} = a^{-1} \left\{ 0.1 \left[ Ri_*^{-3/2} + (\eta^3 + m_d)\, Ri_\tau^{-3/2} \right]^{2/3} + 0.05\, b\, Ri_v^{-1} \right\}, \qquad (28)$$

where $a = 0.25$ and $b = 0.01$ are constant values that depend on the kinds and types of atmospheric forcing. The detailed iteration procedure is described in reference [32]. The solution is uniquely obtained when the right-hand side of Eq. (26) changes sign. We define a solution when the residual in Eq. (26) is less than 10$^{-7}$ or the number of iterations exceeds ten.
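To make the closure concrete, the following sketch treats the right-hand side of Eq. (26) as a residual $R(Ri_q)$ for a given entrainment-to-TKE ratio $w_e/q_i$ and searches for the sign change mentioned above. It is a minimal illustration, not the scheme of reference [32]; the bracketing interval and all input values are hypothetical.

```python
import math

# Constants quoted in the text (Deardorff-type closure)
CZ, C4, C3 = 1.2, 0.3, 0.7

def residual(riq, we_over_qi, ri_star, ri_tau, ri_v, eta3):
    """Right-hand side of Eq. (26) for a trial critical Richardson number."""
    fv = 0.93 * math.exp(-0.35 * riq ** 1.5)        # Eq. (24)
    dh_over_h = 1.0 / (1.0 + 1.9 * riq) ** 0.4      # Eq. (27)
    c1 = 0.05 - 0.03 * math.exp(-4.0 * riq)         # Eq. (21)
    tendency = -0.5 * (CZ + C4 * riq
                       - C4 * (1.0 - fv) * (riq / ri_v) / dh_over_h) * we_over_qi
    transport = c1 * (riq / dh_over_h) ** 1.5 * (
        ri_star ** -1.5 + eta3 * ri_tau ** -1.5)
    return tendency + transport - C3

def solve_riq(we_over_qi, ri_star, ri_tau, ri_v, eta3,
              riq_first_guess, tol=1e-7, itmax=10):
    """Bisect around the first guess of Eq. (28) until the residual of
    Eq. (26) falls below 1e-7 or ten iterations are exceeded, as in the text."""
    lo, hi = 0.5 * riq_first_guess, 2.0 * riq_first_guess  # hypothetical bracket
    for _ in range(itmax):
        mid = 0.5 * (lo + hi)
        r = residual(mid, we_over_qi, ri_star, ri_tau, ri_v, eta3)
        if abs(r) < tol:
            return mid
        if r > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the full scheme, $w_e/q_i$ and $Ri_q$ are of course updated together; the point here is only the bracketing-and-residual structure of the iteration.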
2.2. Diurnally Varying SST Scheme

Sunny and calm conditions result in an SST peak and a highly stratified layer adjacent to the surface [33]. The thickness of the stratified layer is known to be less than a few meters. In the stratified sublayer, a diurnal cycle of sea temperature predominates over the other dynamic and thermodynamic processes [34]. Many kinds of numerical models and empirical schemes have been used for investigating diurnal variations of the upper ocean [33, 34]. However, no model or scheme can simulate the diurnal variations perfectly [33, 34]. A simplified scheme [35] is documented here for introduction into the above-mentioned mixed-layer ocean model. This scheme is referred to as the Schiller and Godfrey (SG) scheme.

Figure 1 presents a schematic diagram of the SG scheme. Depth $h_1$ is the mixed-layer depth in the mixed-layer ocean model. Depth $dz = D_T(t=0)$ is set to 5m; a depth of 5 to 10m is appropriate for $dz$ in the SG scheme [35]. $D_T(t)$ is the depth of the stratified sublayer as a function of time. The sublayer depth plays a crucial role in simulating diurnally varying SST. The range of $D_T(t)$ is $0 < D_T(t) \le dz$; when $D_T(t) > dz$, $D_T(t) = dz$. The equation for $T_1$ in Eq. (9) is simplified to
$$\frac{\partial T_1}{\partial t} + adv(t) - mix(t) = \frac{Q_{sw}(t)}{\rho C_p}\, \frac{\partial f(z)}{\partial z}, \qquad (29)$$

where $adv(t)$ indicates three-dimensional thermal advection and $mix(t)$ three-dimensional thermal mixing. $Q_{sw}(t)$ indicates the net short-wave radiation. $f(z)$ is the rate of transmission of short-wave radiation at depth $z$; the parameterization of short-wave radiation transmission is described later. The value of $T_{bot}$ is identified with that of $T_1$. The total amount of turbulent heat flux and long-wave radiation is negligible when calculating the evolution of $T_1$ because it is released only within a skin layer. When the sublayer vanishes ($D_T(t) = dz$), $T_1(t) = T_{bot}(t) = T_{top}(t)$, and the equation for $T_1$ changes from Eq. (29) to

$$\frac{\partial T_1}{\partial t} + adv(t) - mix(t) = \frac{Q_{sw}(t)}{\rho C_p}\, \frac{\partial f(z)}{\partial z} + \frac{Q_{surf}(t)}{\rho C_p}. \qquad (30)$$

Here, $Q_{surf}$ represents the total amount of turbulent heat flux and long-wave radiation, indicating that the cool-skin effect must be incorporated into the right-hand side of Eq. (29). In Eq. (30), downward (from the atmosphere to the ocean) is positive in $Q_{SW}$ and $Q_{SURF}$. When $Q_{SURF}$ is negative, SST decreases due to the cool-skin effect. When the sublayer begins to form after sunrise, the total buoyancy $\Delta B$ is

$$\Delta B\, \Delta t = \left( \frac{\alpha g}{\rho C_p} \left( Q_{SW}(t)\left(1 - f(D_T(t))\right) + Q_{SURF}(t) \right) + \beta g S_1 (P - E) \right) \Delta t, \qquad (31)$$
where $\Delta t$ is the time step, $\alpha$ and $\beta$ are the expansion coefficients for temperature and salinity, and $S_1$ indicates the salinity at the sea surface. When $\Delta B$ is positive and the amount of short-wave radiation increases at time $t_0$, $D_T(t_0)$ is preliminarily determined. $T_{diff}$ is also preliminarily determined as the difference between $T_{top}$ and $T_{bot}$: $T_{diff}(t) = T_{top}(t) - T_{bot}(t, D_T(t))$. $T_{diff}(t)$ can also be expressed as

$$T_{diff}(t) = \frac{I_{SW}(t)\, G(D_T(t)) + I_{SURF}(t)}{D_T(t)}, \qquad (32)$$

where

$$I_{SW}(t) = \int_{t_0}^{t} Q_{SW}(t')\, dt', \qquad (33)$$

and

$$I_{SURF}(t) = \int_{t_0}^{t} Q_{SURF}(t')\, dt'. \qquad (34)$$
$G(z)$ is the amount of short-wave radiation absorbed at a depth of $z$ and is expressed as

$$G(z) = 1 - f(z) + z\, \frac{\partial f(z)}{\partial z}. \qquad (35)$$
It should be noted that time $t$ is later than time $t_0$; at sunrise, both $I_{SW}$ and $I_{SURF}$ are reset to zero. At depth $D_T(t_0)$,

$$Q_{SW}(t_0)\, G(D_T(t_0)) = -Q_{SURF}(t_0). \qquad (36)$$
We can calculate depth DT(t0) when G(DT(t0)) is positive. At that time, DT(t0) is estimated differently using the definition of the bulk Richardson number and a critical bulk Richardson number Ric = 0.65.
$$D_T(t_0) = \left( \frac{\rho C_p\, Ri_c\, I_\tau(t_0)}{\alpha g\, I_s(t_0)} \right)^{1/2}. \qquad (37)$$
$I_s(t_0)$ and $I_\tau(t_0)$ are expressed as follows:

$$I_\tau(t_0) = \left( \frac{\tau}{\rho}\, \Delta t \right)^2, \qquad (38)$$

$$I_s(t_0) = \frac{\partial Q_{SW}(t_0)}{\partial t}\, \frac{G(D_T(t_0))\, \Delta t^2}{2}. \qquad (39)$$
$D_T(t_0)$ and $G(D_T(t_0))$ are calculated by using the parameterization of short-wave radiation transmission. The final value of $D_T(t_0)$ is evaluated by the iteration method. It is noted that $D_T(t_0) = dz$ when no sublayer is formed. Once the sublayer is formed at time $t_0$, the sublayer depth $D_T(t)$ is calculated in the following manner. First, $G(D_T(t'))$ is calculated from $D_T(t')$, where $t'$ indicates a time prior to time $t$ ($t_0 < t' < t$). Next, $I_s(t')$ is calculated from $G(D_T(t'))$. $D_T(t')$ is then calculated by Eq. (37). $D_T(t)$ is determined through iteration using $D_T(t')$ and the parameterization of short-wave radiation transmission. When time $t$ is within an hour of time $t_0$, $D_{T0}(t_0)$ is calculated by a similar iteration method as used for determining $D_T(t_0)$. Under sunny conditions, the final value of $D_T(t)$, denoted $D_T^*(t)$, is

$$D_T^*(t) = t_d \cdot D_T(t) + (1 - t_d) \cdot D_{T0}(t), \qquad (40)$$

where $t_d$ is the normalized local time and $0 < t_d < 1$. If the atmosphere is cloudy and $t_d > 0.5$,

$$D_T^*(t) = t_d \cdot dz + (1 - t_d) \cdot D_T(t). \qquad (41)$$

From midnight until sunrise,

$$D_T^*(t) = dz. \qquad (42)$$
When $0 < D_T(t) < dz$, $\Delta T_{top}(t)$ is determined from the following equation:

$$\Delta T_{top}(t) = \frac{I_{SW}(t)}{\rho C_P} \left( \frac{1 - f(D_T(t))}{D_T(t)} - \frac{1 - f(dz)}{dz} \right) - \frac{I_{surf}(t)}{\rho C_P} \left( \frac{1}{D_T(t)} - \frac{1}{dz} \right). \qquad (43)$$
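The branch logic of Eqs. (40)-(42) is compact enough to state directly in code; the following sketch assumes $t_d$, $D_T$, and $D_{T0}$ have already been obtained from the iterations above (function and argument names are illustrative):

```python
def sublayer_depth_star(td, DT, DT0, dz, cloudy):
    """Final sublayer depth DT*(t) from Eqs. (40)-(42).

    td     : normalized local time, 0 < td < 1
    DT     : sublayer depth iterated at time t (m)
    DT0    : depth from the within-an-hour-of-t0 iteration (m)
    dz     : fixed top-layer thickness, 5 m in the text
    cloudy : True under cloudy conditions
    """
    if cloudy and td > 0.5:
        return td * dz + (1.0 - td) * DT      # Eq. (41)
    return td * DT + (1.0 - td) * DT0         # Eq. (40), sunny conditions

# From midnight until sunrise the scheme simply sets DT*(t) = dz (Eq. 42).
```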
In the sublayer scheme, parameterization of short-wave radiation transmission plays a crucial role in determining SST. RSH, the rate at which short-wave radiation increases the upper-ocean temperature at depth DT(t) is
$$R_{SH} = \frac{Q_{0^-} - \overline{Q}_{D_T(t)}}{\rho C_p\, D_T(t)}, \qquad (44)$$

where $Q_{0^-}$ is the total net flux of short-wave radiation just beneath the sea surface ($z = 0$) and $\overline{Q}_{D_T(t)}$ is the total net flux at depth $D_T(t)$ (Figure 1). A change in short-wave radiation flux within depth $D_T(t)$ can be expressed with a short-wave transmission parameter $T_r$ as

$$Q_{0^-} - Q_{D_T(t)} = Q_{0^+} \left[ T_r(0^-) - T_r(D_T(t)) \right], \qquad (45)$$

where $Q_{0^+}$ is the total short-wave irradiance incident on the sea surface. The solar transmission $T_r$ is defined as

$$T_r(D_T(t)) = \frac{Q_{D_T(t)}}{Q_{0^+}} \cong \sum_{i=1}^{n} A_i \exp\left(-K_i \cdot D_T(t)\right). \qquad (46)$$

The average short-wave radiation flux $\overline{Q}_{D_T(t)}$ for a layer of depth $D_T(t)$ is calculated as

$$\overline{Q}_{D_T(t)} = \frac{1}{D_T(t)} \int_0^{D_T(t)} \left( Q_{0^-} - Q_z \right) dz, \qquad (47)$$

where $Q_z$ is the total net flux of short-wave radiation at depth $z$ ($0 < z < D_T(t)$). The short-wave radiation parameterization is incorporated into the SG model as

$$G_{D_T}(t) = \left[ \sum_{i=1}^{4} A_i \cdot D_T(t) - \sum_{i=1}^{4} A_i \cdot K_i^{-1} \left( 1 - \exp\left(-K_i \cdot D_T(t)\right) \right) \right] D_T(t)^{-1}. \qquad (48)$$

The coefficients and parameters $A_i$ and $K_i$ are expressed as

$$y = c_1 \cdot Chl + c_2 \cdot CI + c_3 \cdot \cos^{-1}\theta_s + c_4. \qquad (49)$$

Here, $Chl$ is the chlorophyll-a concentration (mg m$^{-3}$), $CI$ is the cloud index ($0 < CI < 1$), and $\theta_s$ is the solar zenith angle. $y$ represents the fitting parameters for $A_i$ and $K_i$, and $c_1$ to $c_4$ are linear regression coefficients [36]. The coefficients $c_1$ to $c_4$ differ between clear and cloudy skies [36]. A small numerical sketch of this band model follows below.
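As an illustration of Eqs. (46) and (48), the following sketch evaluates a four-band exponential transmission profile. The band amplitudes and attenuation coefficients below are made-up placeholders; the actual $A_i$ and $K_i$ come from the regression fits of Eq. (49) and depend on Chl, CI, and $\theta_s$ [36].

```python
import math

# Hypothetical four-band amplitudes (sum to 1) and attenuation
# coefficients (m^-1); real values follow from Eq. (49), see [36].
A = [0.58, 0.25, 0.12, 0.05]
K = [2.9, 0.35, 0.12, 0.05]

def transmission(z):
    """Solar transmission Tr(z) = sum_i A_i exp(-K_i z), as in Eq. (46)."""
    return sum(a * math.exp(-k * z) for a, k in zip(A, K))

def mean_absorbed_fraction(depth):
    """G_DT of Eq. (48): layer-averaged absorbed fraction of the
    surface short-wave flux over a layer of thickness `depth` (m)."""
    total = sum(a * depth - (a / k) * (1.0 - math.exp(-k * depth))
                for a, k in zip(A, K))
    return total / depth

print(transmission(5.0), mean_absorbed_fraction(5.0))  # e.g., at dz = 5 m
```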
Figure 1. Schematic diagram of SG scheme.
2.3. Ocean General Circulation Model

The Meteorological Research Institute Community Ocean Model (MRI.COM) [37] and Noh and Kim’s mixed-layer scheme [38] are used as an ocean general circulation model. The
formulations in MRI.COM are not presented here but can be found in a technical note if the reader is interested in the details [37]. The MRI.COM used in the present study has a total of 54 levels in the hybrid vertical coordinates of geopotential and normalized geopotential heights. In particular, 17 levels are allocated above 200m depth, and six layers are allocated in the upper 15m to better resolve upper-ocean dynamics and thermodynamics. Ocean bottom topography is provided from Sandwell’s realistic topography dataset [39]. Smagorinsky-type biharmonic viscosity [40], isopycnal diffusion [41], isopycnal thickness diffusion [42], and background vertical diffusivity [43] are used for calculating dynamics and thermodynamics. Paulson-Simpson’s water-type I [44] is employed for estimating short-wave radiation transmission and absorption in the upper ocean.

We employ two versions of MRI.COM: the North-Pacific version and the regional version. The computational domain for the North-Pacific version covers 15°S to 65°N, 100°E to 105°W with a horizontal grid spacing of 0.25°. The computational domain for the regional version covers 10°N to 50°N, 120°E to 160°E with a horizontal grid spacing of 0.25°. The 0.25° resolution is sufficient to resolve the eye and the radius of maximum wind speed of TCs.

The North-Pacific version of MRI.COM is run for two months from a horizontally uniform stratification while nudging temperature and salinity to the daily mean fields (a schematic of such Newtonian relaxation is sketched below). The daily mean fields are obtained from the daily mean dataset reanalyzed by the Meteorological Research Institute multivariate Ocean Variational Estimation (MOVE) system [45]. The regional version of MRI.COM is then run without nudging temperature and salinity to the daily mean field. In the first and second runs, the National Centers for Environmental Prediction (NCEP) – Department of Energy (DOE) Atmospheric Model Intercomparison Project reanalysis data (NCEP R2) [46] are used as atmospheric forcing without introducing an artificial TC-like vortex. The ocean initial and boundary conditions are provided from the results of the second run. The regional version of MRI.COM is subsequently run again after providing atmospheric forcing in which an artificial TC-like vortex is embedded, together with oceanic boundary conditions from MRI.COM, for realistically simulating the oceanic response to a TC.
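The nudging step referred to above is standard Newtonian relaxation. The sketch below shows the idea; the relaxation time scale and array names are illustrative and are not taken from MRI.COM’s actual configuration.

```python
import numpy as np

def nudge(field, target, dt, tau):
    """Relax a model field toward a reference field (Newtonian relaxation).

    field  : model temperature or salinity (numpy array)
    target : daily-mean analysis (e.g., from the MOVE system)
    dt     : model time step (s)
    tau    : relaxation time scale (s); an illustrative choice
    """
    return field + dt * (target - field) / tau

# Example: relax a small temperature field toward its daily-mean analysis
T = np.full((4, 4), 28.0)        # model SST (deg C), illustrative
T_obs = np.full((4, 4), 27.5)    # analyzed daily mean (deg C), illustrative
T = nudge(T, T_obs, dt=3600.0, tau=5.0 * 86400.0)
```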
2.4. Regional Atmospheric Model

Atmospheric models are roughly divided into two types: hydrostatic and nonhydrostatic models. The difference between hydrostatic and nonhydrostatic models is simply whether or not the hydrostatic approximation is used. In general, the horizontal grid spacing, physical schemes, and so on are quite different between them. From a downscaling point of view, the numerical result calculated by a hydrostatic model with a coarser horizontal resolution is often used for acquiring initial and boundary conditions in order to perform numerical studies using a nonhydrostatic model with a finer horizontal resolution. In that sense, complementary roles are assigned to each model. This chapter thus uses both models from the downscaling point of view.

Atmospheric models can also roughly be divided into two other types from a computational point of view: finite-difference and spectral methods. The advantages of a spectral model include no pole problem due to the convergence of the meridians, an isotropic representation
when using triangular truncation resulting in enhanced stability and longer time steps, much smaller phase errors, no aliasing of nonlinear terms, and better respect of the conservation properties of continuous equations [47]. In contrast, there is an issue of computational efficiency when the horizontal resolution of models gets finer.

The JMA typhoon model (TYM) is a kind of hydrostatic, regional spectral model [48]. Its computational domain covers a 6480km square with a 321 × 321 regular transform grid and a horizontal resolution of nearly 20km around a TC center [3], unlike the original routine model. The computational domain can be changed with respect to the central position of the targeted TC. TYM has a total of 25 vertical levels in σ-p coordinates. The mapping projection is automatically selected for each targeted TC position: either Mercator when the central position is south of 20°N or Lambert conformal when the central position is north of 20°N. TYM is formulated with equations of motion, mass conservation, specific humidity, and virtual temperature under the hydrostatic approximation. The typhoon vortex, which is a part of the atmospheric initial condition, is created by the typhoon bogus methodology before the numerical integration is implemented [48] (an illustrative analytic bogus profile is sketched below). At that time, the pressure gradient near the TC center is arbitrarily weakened to prevent significant numerical shocks in the first step of TC numerical prediction. That is why the central pressure initially tends to be higher than the best-track central pressure reported by the Regional Specialized Meteorological Center (RSMC). A lateral boundary condition is required in order to drive the coupled model. In the present study, the lateral boundary condition is created by the hydrostatic global spectral model (GSM) developed by JMA [48]. The physical processes incorporated in TYM are described in technical notes [48, 49].
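For illustration only, a common analytic choice for a bogus sea-level pressure profile is a Fujita-type formula; the text does not specify the exact form used in TYM, so the profile and all parameter values below are assumptions.

```python
import math

def fujita_pressure(r, p_env=1010.0, dp=60.0, r0=80.0):
    """Fujita-type radial sea-level pressure profile (hPa) for a bogus vortex.

    r     : radius from the TC center (km)
    p_env : environmental pressure (hPa), illustrative
    dp    : central pressure deficit (hPa), illustrative
    r0    : scale radius (km), illustrative
    """
    return p_env - dp / math.sqrt(1.0 + (r / r0) ** 2)

for r in (0, 50, 100, 200, 400):
    print(r, round(fujita_pressure(r), 1))
```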
2.5. Regional Nonhydrostatic Atmospheric Model

When the horizontal resolution of an atmospheric model is less than 10km, the hydrostatic approximation is no longer valid. Atmospheric water substances change phase, influencing the surrounding temperature and moisture fields and locally changing the density of air. Temperature and moisture variations around a TC and its surrounding spiral bands are important for TC formation, development, and structural change, so a nonhydrostatic model is required to resolve a TC in the model. Our nonhydrostatic model has two-way triply-nested movable functions [50] and is coupled with a mixed-layer ocean model [51]. The nonhydrostatic model was reconstructed from a community model developed by the Japan Meteorological Agency (JMA) Numerical Prediction Division in partnership with the Meteorological Research Institute [50]. The dynamics and physical processes included in the nonhydrostatic model are older than those used in the operational JMA nonhydrostatic mesoscale model [52-55]. It should be noted that the operational JMA nonhydrostatic model does not have the two-way triply-nested movable functions described in section 6.1. A nonhydrostatic atmosphere-wave-ocean coupled model, described in section 7, has recently been developed based on the operational JMA nonhydrostatic model. The physical processes in the nonhydrostatic model include cloud physics expressed as an explicit three-ice bulk microphysics scheme [52, 56], a resistance law assumed for turbulent heat and momentum fluxes in the surface-boundary layer, exchange coefficients for momentum and enthalpy transfers over the sea determined from Kondo’s bulk formulas [57],
a turbulent closure model in the atmospheric-boundary layer [58, 59], and an atmospheric radiation scheme [60]. The Kain-Fritsch convective parameterization scheme [61] is used in conjunction with this model to represent cumulus convection when the horizontal grid spacing exceeds 5km. The additional experimental design for each TC is described in the corresponding section.
Figure 2. Best track of Typhoon Rex in 1998 (open circles) and observation course of R/V Keifu Maru (triangles). The fifth-generation Geostationary Meteorological Satellite visible images at 0000 UTC 27 and 0000 UTC 29 August 1998 (shaded circles) are shown together with the best track and the observation course. [From Wada et al., 2009]
3. Oceanic Response to TCs

Vigorous oceanic responses to TCs are seen in the coastal and open oceans from both dynamic and thermodynamic points of view. TC-induced surface waves can be higher than 20m, TC-induced upper-ocean currents can be stronger than 1ms-1, and SST decreases by up to several degrees [62]. A remarkable oceanic response is the formation of SSC along the track of a TC. Marked SSC can be seen in the wake of a TC in satellite radiometer data [63]. Oceanic dynamics and thermodynamics associated with the formation of SSC have been studied both theoretically and numerically [64-68]; these studies indicate that upwelling (Ekman pumping) and vertical turbulent mixing are the dominant factors in TC-induced SSC. The role of upwelling in TC-induced SSC was addressed earlier in observational studies, while the importance of vertical turbulent mixing was recognized later [28, 69-70]. Recent developments in ocean general circulation models [71-75] and observational technologies [76-78] have brought a wider perspective on the oceanic response to TCs. In addition, a comprehensive approach using in situ observations, satellite observations, and numerical modeling (data assimilation and numerical simulation) is now possible for understanding the oceanic response to TCs. An example of such a comprehensive study is presented here for Typhoon Rex in 1998 [27, 28].
3.1. Observation
Figure 2 displays RSMC best-track positions of Rex from 0000 UTC 24 to 0000 UTC 31 August 1998, together with the points of observation by R/V Keifu Maru. The evolutions of Rex’s best-track central pressure and translation speed are plotted in Figure 3. A tropical depression located south of Okinawa Island on 24 August 1998 developed into a typhoon on 25 August. The trajectory of Rex was trochoid-like (Figure 2) during the intensification and mature phases (Figure 3). Rex maintained a central pressure of 960hPa from 27 to 29 August. The minimum central pressure was 955hPa, reached when Rex’s translation speed was nearly 1ms-1, the slowest during its lifecycle. The visible images of the fifth-generation Geostationary Meteorological Satellite (GMS-5) demonstrate that Rex had a clear eye and a concentric eyewall at 0000 UTC 27 and 0000 UTC 29 August 1998 (Figure 2). Figure 2 also displays the locations of observations by R/V Keifu Maru of the Japan Meteorological Agency. Maritime and hydrographic conditions were observed around 20°N, 130°E, at a point that had been a regular station for monitoring TC activity from 1970 to 1999. After finishing the observation at the fixed station, R/V Keifu Maru trailed Rex from 27 to 29 August and then crossed Rex’s track from 29 to 30 August (Figure 2). At that time, R/V Keifu Maru observed a sudden decrease in SST from 30°C to 27.2°C (Figure 4). As indicated in Figure 2, the sudden decrease in SST occurred after the passage of Rex.
Figure 3. Time series of best-track central pressure (hPa) and the traveling speed of Typhoon Rex in 1998. ‘A’ indicates the intensification phase including the integration at 75h, ‘B’ represents the recurvature phase, and ‘C’ designates the mature phase including the integration at 123h. [From Wada et al., 2009]
The Tropical Rainfall Measuring Mission (TRMM)/TRMM Microwave Imager (TMI) began to observe SST in December 1997. From TRMM/TMI SST data, we can see TC-induced SSC along the tracks of TCs [3, 5]. Horizontal distributions of SST deviations from those on 24 August are depicted for 27 August (Figure 5a), 29 August (Figure 5b), and 31
August (Figure 5c) using the TRMM/TMI three-day mean SST data. The SST deviations are remarkable on the southern side of Rex’s track on 27 August (Figure 5a). SST deviations continue to appear on the right side of Rex’s track on 29 August, when Rex undergoes recurvature (Figure 5b). After the recurvature, SST deviations appear behind Rex’s center as Rex moves slowly at a rate of nearly 1ms-1 (Figure 5c). The sudden decrease in SST observed by R/V Keifu Maru (Figure 4) was produced around 26 August near 25°N, 135°E and corresponded to the intensification phase, when the translation speed was relatively fast, exceeding 3ms-1 (Figure 3).
Figure 4. Time series of SST observed by R/V Keifu Maru from 0000 UTC 24 to 0000 UTC 31 August 1998. [From Wada, 2005]
Next, we demonstrate how the SSC formation mechanism varies with Rex’s translation speed and phase. We address two representative cases, the fast-moving and slowly-moving phases, using the results of numerical simulations by MRI.COM. We also address the “recurvature” phase, which occurs between the fast-moving and slowly-moving phases. The phases are defined as follows. The fast-moving phase is from 1800 UTC 24 to 1800 UTC 27 August, the recurvature phase is from 2100 UTC 27 to 1500 UTC 28 August, and the slowly-moving phase is from 1800 UTC 28 to 1500 UTC 29 August 1998 (Figure 3). “Fast” and “slow” speeds of motion are roughly distinguished using the phase speed of the first baroclinic mode [79] as the criterion; a sketch of this criterion follows.
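As a rough illustration of this criterion, the following sketch estimates the first-baroclinic-mode phase speed with a two-layer reduced-gravity approximation and compares it with a TC translation speed. The two-layer approximation and all numerical values are assumptions chosen for illustration; the chapter relies on [79] for the actual phase speed.

import numpy as np

def first_baroclinic_phase_speed(rho_upper, rho_lower, upper_depth, g=9.81):
    # Two-layer reduced-gravity estimate: c1 = sqrt(g' H),
    # with g' = g * (rho_lower - rho_upper) / rho_lower.
    g_reduced = g * (rho_lower - rho_upper) / rho_lower
    return np.sqrt(g_reduced * upper_depth)

def classify_translation(tc_speed, c1):
    # Label a translation speed as "fast" or "slow" relative to c1.
    return "fast" if tc_speed > c1 else "slow"

# Illustrative subtropical summer values (assumptions, not observed data)
c1 = first_baroclinic_phase_speed(rho_upper=1022.0, rho_lower=1026.0, upper_depth=30.0)
print(classify_translation(3.0, c1))  # > 3 m/s: Rex's fast-moving phase
print(classify_translation(1.0, c1))  # ~ 1 m/s: Rex's slowly-moving phase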
3.2. SSC Formation Processes

The numerical integration by MRI.COM starts at 0000 UTC 24 August 1998 with an integration time of 168 hours (seven days) and a time step of 10 minutes. The atmospheric forcings required for the numerical simulation include a Rankine vortex prescribed from the
RSMC best-track central pressure, maximum sustained wind speed, the radius of 25ms-1 or 15ms-1 winds, and the position (longitude and latitude) every six hours. Outside the Rankine vortex, NCEP R2 fields such as sensible and latent heat fluxes, solar and long-wave radiation, fresh-water flux, and momentum are used as atmospheric forcings for running MRI.COM. A sketch of the Rankine vortex profile is given below.
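The Rankine vortex itself has a standard analytic form: solid-body rotation inside the radius of maximum wind and a 1/r decay outside. The minimal sketch below constructs such a wind profile; the blending with the NCEP R2 fields outside the vortex and the conversion to wind stress are not shown, and the parameter values are illustrative rather than taken from the Rex best track.

import numpy as np

def rankine_wind(r, v_max, r_max):
    # Tangential wind of a Rankine vortex: solid-body rotation
    # (v = v_max * r / r_max) inside r_max, 1/r decay outside.
    r = np.asarray(r, dtype=float)
    inner = v_max * r / r_max
    outer = v_max * r_max / np.maximum(r, 1.0)  # guard against r = 0
    return np.where(r <= r_max, inner, outer)

# Illustrative parameters (not the Rex best-track values):
# 40 m/s maximum wind at a 50 km radius, evaluated out to 500 km
radii = np.arange(0.0, 500e3, 25e3)
winds = rankine_wind(radii, v_max=40.0, r_max=50e3)
# Radius of 25 m/s winds on the outer profile: r25 = v_max * r_max / 25
r25 = 40.0 * 50e3 / 25.0   # 80 km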
Figure 5. Horizontal distributions of the deviation of TRMM/TMI three-day mean SST from that on 24 August 1998, for (a) 27 August, (b) 29 August, and (c) 31 August. Open and shaded circles and triangles are the same as in Figure 2. [From Wada, 2005]
A horizontal distribution of simulated SST at 75h is presented in Figure 6a, and the horizontal distribution at 123h in Figure 6b. Rex-induced SSC is remarkable on the southern side of Rex’s track at 75h (Figure 6a), corresponding to the pattern of SSC derived from the TRMM/TMI three-day mean SST (Figure 5a). The sudden decrease in SST in Figure 4 had already occurred at 75h around 25°N, 135°E. Simulated SSC occurs behind Rex after Rex recurves (Figure 6b). This feature also corresponds to the pattern of SSC derived from the TRMM/TMI three-day mean SST (Figure 5b). It should be noted that thick cumulonimbus
clouds encompassing Rex (Figure 2) mask SSC underneath Rex’s central core (Figure 5b), even though the TRMM/TMI radiometer is well calibrated. At 123h, R/V Keifu Maru observes the area around 25°N, 135°E where a trail of SSC remains, corresponding to the sudden decrease in SST in Figure 4 (Figure 6c). As a whole, the numerical simulation successfully reproduces the patterns of SSC derived from the TRMM/TMI three-day mean SST.
Figure 6. Horizontal distributions of temperature (°C) and current components (ms-1) simulated by MRI.COM at (a) 75h, and (b) 123h. Contours are drawn for each 1°C. Typhoon symbols indicate the best-track position of Typhoon Rex in 1998; symbol ‘K’ indicates the positions of R/V Keifu Maru. [From Wada et al., 2009]
A horizontal distribution of simulated SST at 75h (Figure 6a) reveals the oceanic response to Rex during the fast-moving and intensification phases, while the horizontal distribution at 123h (Figure 6b) reveals the oceanic response during the slowly-moving and mature phases. Next, we address differences in the response of the upper ocean to Rex. Figure 7a depicts a longitude-depth section of temperature and current components at 24°N at 75h, and Figure 7b presents a latitude-depth section of temperature and current components at 141°E at 123h. Both sections show the profiles of temperature and current components along Rex’s track. During the fast-moving and intensification phases at 75h, the 21°C to 23°C isotherms, corresponding to a seasonal thermocline, locally reach nearly 100m depth right behind Rex at 138.25°E (see line “C” in Figure 7a). In contrast, the 26°C isotherm reaches close to the surface at the same location. An indication of Ekman pumping is found at 137.5°E, where the wake corresponds to the location after the passage of Rex, and large SSC occurs at 137°E (see line “P” in Figure 7a), where a deepened mixed layer penetrates into the Ekman-pumping area. When Rex is in the slowly-moving phase, Ekman pumping is dominant, centered around 28°N, 141°E beneath Rex’s central position (line “T” in Figure 7b). In addition, the rise of the seasonal thermocline, represented by the 22°C to 26°C isotherms around 26.75°N, indicates upwelling. The upward current velocities around 27.5°N to 28°N, 141°E (white square in Figure 7b) are small compared with the upward current velocities around
137.5°E in Figure 7a and around 26.75°N in Figure 7b. The other remarkable SSC occurs around 26°N, 141°E (hashed circle in Figure 7b), together with a deepening of the mixed layer, in an area corresponding to the recurvature phase. Below the seasonal thermocline, a near-inertial oscillation is seen on the 20°C isotherm, which is a typical oceanic response to a TC.
Figure 7. Vertical sections of temperature (°C) and current components (ms-1) along the traveling direction simulated by MRI.COM. (a) Longitude-depth section along 24°N at 75h. (b) Latitude-depth section along 141°E at 123h. Typhoon symbols indicate the best-track position of Rex for each integration time. [From Wada et al., 2009]
A latitude-depth section of temperature and current components at 138°E at 75h is displayed in Figure 8a, and a longitude-depth section of temperature and current components at 28°N at 123h is displayed in Figure 8b. Both sections show the profiles of temperature and current components across Rex’s track. During the fast-moving and intensification phases at 75h, Rex-induced Ekman pumping occurs around 23.5°N to 25.25°N from the surface to
200m depth. A seasonal thermocline represented by the 21°C to 23°C isotherms deepens to nearly 100m depth at 23.75°N, while SSC occurs due to local upwelling represented by the 25°C isotherm (hashed circle in Figure 8a). This feature is similar to lines “C” and “P” in Figure 7a. The local upwelling has a scale of at most one grid interval (0.25°), not a few hundred kilometers, indicating that it may be caused by vertical turbulent mixing induced by shear instability.
Figure 8. As in Figure 7 except for across the traveling direction of Rex. (a) Latitude-depth section along 138°E. (b) Longitude-depth section along 28°N. [From Wada et al., 2009]
During the slowly-moving and mature phases at 123h, Ekman pumping is remarkable from the surface to 200m depth, centered around 28°N, 141°E near Rex’s center (line “T” in Figures 7b and 8b). Remarkable SSC occurs around 28°N, 141.75°E
(hashed circle in Figure 8b), accompanied by a deepening of the mixed layer, corresponding to the right side of Rex’s track. Unlike the vertical profiles in Figure 7, no inertial pumping induced by near-inertial currents is seen in Figure 8 [64].
Figure 9. Latitude-depth sections of TKE from the surface to 100m depth across (a) 138°E and (b) 141°E. Contour intervals are drawn for each unit (10-1 m2s-2). Typhoon symbols represent the best-track position of Rex at each integration time. [From Wada et al., 2009]
3.3. Turbulent Kinetic Energy and Upper-Ocean Stratification

Latitude-depth sections of turbulent kinetic energy (TKE) at 75h and 123h calculated by the mixed-layer scheme [34] are presented in Figure 9. The section at 75h is across Rex’s track, while that at 123h is along the track. TKE is high on the right side of Rex’s track at 75h during the fast-moving and intensification phases. TKE is the highest near the surface due to
breaking surface waves, and some of the high TKE penetrates to 70m depth (hashed circle in Figure 9a) due to shear instability. During the slowly-moving and mature phases, TKE is the highest behind Rex. The highest TKE appears around 40m depth and is probably caused by shear instability. The highest-TKE areas are close to the areas where local upwelling and mixed-layer deepening occur simultaneously. It should be noted that TKE is indeed small (Figure 9a) where the upwelling represented by the 25°C isotherm is remarkable (Figure 8a).
Figure 10. As in Figure 9 except for Brunt-Väisälä frequency squared from the surface to 200m depth. Contour intervals are drawn for each 0.2 unit (10-4 s-2). [From Wada et al., 2009]
Latitude-depth sections of the Brunt-Väisälä frequency squared at 75h and 123h calculated by the mixed-layer scheme [34] are depicted in Figure 10. High Brunt-Väisälä frequency squared indicates strong stratification, corresponding to the location of a seasonal thermocline. During the fast-moving and intensification phases at 75h, the upwelling raises the seasonal thermocline near Rex’s center (“S” in Figure 10a). In addition, the seasonal
thermocline deepens far behind Rex (“D” in Figure 10a), where SSC is remarkable after the passage of Rex. This remarkable SSC is caused by the combination of local upwelling and mixed-layer deepening (hashed circle in Figure 8a) where Ekman pumping occurs.
Figure 11. Horizontal distributions of TKE (gray scale for each 10-1 m2 s-2), shear-induced TKE (thick, solid contours for each 10-1 m2s-2), SSHA (thin, dashed contours for each 5cm) at (a) 0.5m depth at 75h, (b) 26m depth at 75h, (c) 0.5m depth at 123h, and (d) 26m depth at 123h. Typhoon symbols represent the best-track position of Rex, and arrows indicate Rex’s direction of travel at each integration time. [From Wada et al., 2009]
During the slowly-moving and mature phases at 123h, the upwelling raises the seasonal thermocline near Rex’s center, much as it does at 75h. A locally high Brunt-Väisälä frequency squared is seen from 80m to 100m depth where remarkable SSC occurs due to the upwelling together with the deepening of the mixed layer. In addition, the inertial pumping induced by the inertial oscillation leads to weak stratification in the upper ocean and locally deepens the seasonal thermocline around 26°N (hashed circle in Figure 10b).

In order to investigate SSC formation from another angle and to clarify the relationships among vertical turbulent mixing, local upwelling, and Ekman pumping, we plot horizontal distributions of sea-surface height anomaly (SSHA), TKE, and shear-induced TKE at 0.5m (Figure 11a) and 26m (Figure 11b) depths at 75h and at 0.5m (Figure 11c) and 26m (Figure 11d) depths at 123h. The 26m depth corresponds to a spatial-temporal mean mixed-layer depth for a 6.25° square box collocated at Rex’s center, averaged over the 168-hour integration. The SSHA is estimated as the deviation of sea-surface height (SSH) from its initial value on 24 August, as calculated by MRI.COM. We assume that negative SSHA includes the effect of upwelling [28].
TKE exhibits a concentric pattern around Rex’s center during the fast-moving and intensification phases at 75h (Figure 11a). The TKE pattern is mainly caused by breaking surface waves. The areas of shear-induced TKE have a ring-shaped pattern at 0.5m depth and are located northeast, southwest, and southeast of Rex’s position. The area of large shear-induced TKE on the rear-right side overlaps the area of negative SSHA. The ring-shaped pattern of TKE becomes indistinct at 26m depth (Figure 11b), where shear-induced TKE appears to be dominant in the TKE field. The areas of large shear-induced TKE are located east, south, southeast, and northwest of Rex’s position. The area of large shear-induced TKE overlaps the area of negative SSHA on the rear side (hashed circle in Figure 11b), which is consistent with the maximum TKE in Figure 9a. During the slowly-moving and mature phases at 123h, TKE exhibits a concentric pattern around Rex’s center, which presumably reflects the ring-shaped area of shear-induced TKE (Figure 11c). The pattern is similar to that of TKE in the fast-moving phase (Figure 11a). Several regions of large shear-induced TKE exist around Rex’s center and almost overlap the area of reduced upper-layer thickness, except the one in front of Rex (hashed circle in Figure 11c). This markedly differs from the features during the fast-moving and intensification phases. In addition, the negative SSHA in the slowly-moving phase is larger in magnitude by about 10cm than that during the fast-moving and intensification phases, indicating that Ekman pumping more effectively influences the oceanic response to Rex. Most of the region of large TKE at 26m depth ahead of Rex’s center does not overlap the area of reduced upper-layer thickness. The areas of large shear-induced TKE, however, are spotted locally near Rex and overlap the area of reduced upper-layer thickness (hashed circle in Figure 11d). Near the spotted areas around 27°N, 141°E in Figure 11d, the mixed layer deepens due to enhanced TKE. This suggests that shear-induced vertical turbulent mixing helps entrain cool water transported by the local upwelling into the upper layer around the area where Ekman pumping is dominant. Therefore, shear-induced vertical turbulent mixing and local upwelling play important roles in forming SSC. We now explore to what extent each TKE production or dissipation term is dominant around Rex in the mixed-layer scheme. The TKE equation is written as follows:
\[ \frac{\partial E}{\partial t} = K_B\,\frac{\partial B}{\partial z} + K\left(\frac{\partial U}{\partial z}\right)^{2} + \frac{\partial}{\partial z}\left(K_E\,\frac{\partial E}{\partial z}\right) - \varepsilon , \qquad (50) \]
where the first term on the right-hand side indicates buoyancy production, the second term shear production, the third term turbulent transport, and the fourth term dissipation. U is the horizontal mean velocity, B is the mean buoyancy, E is the TKE, and K is the kinematic eddy viscosity. KB and KE are the eddy diffusivities for B and E. The amount of TKE production and dissipation in each term of Eq. (50) is estimated for a 6.25° square box collocated at Rex’s center. Hourly TKE production and dissipation for each term in Eq. (50) are output at all vertical levels during the 168-hour integration. The mixed-layer depth is defined as the depth at which the difference in density from the surface to the mixed-layer base is within 0.25kg m-3. TKE production by breaking surface waves is included in the turbulent transport term in Eq. (50) as a surface-boundary condition. A computational sketch of this estimation is given below.
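A minimal sketch of this estimation procedure follows: the mixed-layer depth is diagnosed with the 0.25kg m-3 density criterion, and a budget term is integrated vertically over the mixed layer. The array shapes, profile values, and the placeholder shear-production profile are assumptions; only the density criterion and the idea of term-by-term vertical integration come from the text.

import numpy as np

def mixed_layer_depth(z, density, d_rho=0.25):
    # Depth at which density first exceeds the surface value
    # by d_rho (kg m-3), the criterion used in the text.
    excess = density - density[0]
    idx = int(np.argmax(excess > d_rho))
    return z[idx] if excess[idx] > d_rho else z[-1]

def integrate_over_mixed_layer(z, term, mld):
    # Trapezoidal vertical integral of a TKE budget term (m2 s-3)
    # from the surface down to the mixed-layer base.
    mask = z <= mld
    zz, tt = z[mask], term[mask]
    return float(np.sum(0.5 * (tt[1:] + tt[:-1]) * np.diff(zz)))

# Hypothetical hourly profiles on model levels (z positive downward)
z = np.linspace(0.5, 200.0, 54)
rho = 1022.0 + 0.02 * z                        # idealized stratification
shear_production = 1e-7 * np.exp(-z / 30.0)    # placeholder profile
mld = mixed_layer_depth(z, rho)
print(mld, integrate_over_mixed_layer(z, shear_production, mld))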
Figure 12. Time series of vertically integrated terms in the TKE equation (Eq. 50). The vertically integrated terms include the TKE trend on the left-hand side of Eq. (50), buoyancy production/destruction, turbulent transport, shear production, and dissipation on the right-hand side in Eq. (50). [From Wada et al., 2009]
Turbulent transport and dissipation are the most dominant terms in Eq. (50) (Figure 12). The major TKE source transported by the turbulence is breaking surface waves. In contrast, shear production is small compared with turbulent transport and dissipation (Figure 12). This result suggests that TKE production by shear instability is negligibly small in a 6.25° square box collocated at Rex’s center because the impact of breaking surface waves is dominant. Nevertheless, shear-induced vertical turbulent mixing is essential for forming the remarkable SSC induced by Rex.
4. Oceanic Biochemical Response to TCs

Vertical turbulent mixing and upwelling affect the oceanic biochemical response to TCs [14, 15]. Although the effect of TCs on the upper-ocean thermal and physical structure is relatively well known, their impact on the oceanic biochemical response is still unclear. An example of the oceanic biochemical response is air-sea CO2 transfer, particularly the processes associated with sudden variations of oceanic pCO2 caused by a TC [80-83]. First, we present observational evidence for sudden variations of oceanic pCO2 caused by TCs and then investigate the associated upper-ocean thermal and physical structure using the results of numerical simulation by MRI.COM [37].
Figure 13. Best tracks of Typhoons Tina and Winnie in 1997 and the location of the moored buoy in the East China Sea.
A CO2 measuring system mounted on a 10m-diameter moored buoy in the East China Sea (28°10´N, 126°20´E; depth 136m) successfully observed significant variations in pCO2sea on time scales of several days to a week [83]. Sudden variations in pCO2sea were observed after the passages of Typhoons Tina and Winnie in 1997. Tina passed over the moored buoy on 7 August 1997, while Winnie passed 300km east of the buoy on 17 August (Figure 13). The moored buoy was located on the right side of Winnie’s track (Figure 13). During the passage of Tina, the buoy observed a maximum wind speed of 29.5ms-1 (Figure 14) and a maximum wave height of 10.9m. Short-period variations of sea temperature at 50m depth were not seen for nearly 24 hours after the passage of Tina. As the observed sea-level pressure fell, air temperature and SST gradually decreased and the CO2 flux suddenly increased. Even though the sudden variation of CO2 flux ceased after the passage of Tina, the 29°C-normalized pCO2 continued to be high. We next calculate the air-sea flux of CO2 from the gas-transfer coefficient, the difference between pCO2sea and pCO2air, and the wind speed observed every three hours by the buoy [84]:
\[ F = E(W)\left(\mathrm{pCO_2^{sea}} - \mathrm{pCO_2^{air}}\right) . \qquad (51) \]
Here, F indicates the air-sea flux of CO2, E(W) indicates the gas-transfer coefficient, and W is the wind speed. A sketch of this calculation is given below.
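Eq. (51) is straightforward to evaluate from the buoy record. The sketch below assumes a simple quadratic wind-speed dependence for the gas-transfer coefficient as a placeholder, since the chapter does not reproduce the functional form of E(W) from [84].

import numpy as np

def co2_flux(wind_speed, pco2_sea, pco2_air, a=0.026):
    # Eq. (51): F = E(W) * (pCO2_sea - pCO2_air).
    # E(W) below is a placeholder quadratic wind-speed dependence;
    # the actual coefficient of [84] also involves solubility and
    # the Schmidt number, which are omitted here.
    e_w = a * np.asarray(wind_speed, dtype=float) ** 2
    return e_w * (pco2_sea - pco2_air)

# Buoy-like sample: 3-hourly winds (m/s) and constant pCO2 values (uatm)
wind = np.array([8.0, 15.0, 29.5, 20.0])
flux = co2_flux(wind, pco2_sea=380.0, pco2_air=360.0)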
Figure 14. Time series of sea-level pressure (hPa), wind speed (ms-1), sea temperature (°C) at 0m, 50m, and 100m depths; surface air temperature (°C); pCO2air, pCO2sea, and normalized pCO2 (μatm) at a constant temperature of 29°C; the difference in pCO2 (μatm) between the atmosphere and the ocean; and CO2 flux (mmol m-2 per 3h) at the moored buoy from 5 to 10 August 1997. [From Nemoto, Midorikawa et al., 2009]
Figure 15. Same as Figure 14 except for 15 to 20 August 1997. [From Nemoto, Midorikawa et al., 2009]
During the passage of Winnie, the buoy observed a maximum wind speed of 30.0ms-1 (Figure 15) and a maximum wave height of 11.1m. Both observations are quite similar to those during the passage of Tina. However, the central pressure of nearly 980hPa for Winnie is higher than that of nearly 960hPa for Tina (Figure 14). The decrease in SST is relatively small compared with the decrease in Figure 14. Nevertheless, the CO2 flux suddenly increases during the passage of Winnie. Sea temperature at 50m depth is almost equal to SST after the
passage of Winnie, and sea temperature at 100m depth begins to increase during and after the passage, indicating that the mixed layer deepens due to Winnie.
Figure 16. Time series of observed sea temperatures at 0m, 50m, and 100m depths and those simulated by MRI.COM at 0.5m, 50m, and 100m depths. [Courtesy of Dr. Midorikawa].
We perform numerical simulations using MRI.COM [37] in order to investigate the upper-ocean thermal and physical structure associated with the sudden variations of oceanic pCO2. The North-Pacific version of MRI.COM is first integrated for two months, up to 3 August 1997, while nudging to daily sea temperature and salinity data. The daily sea temperature and salinity data are calculated from the MOVE system [45]. After this calculation, we performed another numerical simulation for 21 days without nudging, using the same North-Pacific version of MRI.COM. A regional version of MRI.COM was then used to simulate the oceanic responses to Tina and Winnie. The domain of the regional version of MRI.COM is 10°N to 50°N and 120°E to 160°E. The horizontal resolution is 0.25°, and there are 54 vertical layers. Atmospheric forcings are obtained from the NCEP R2 dataset [46]. A Rankine vortex presumed from the best-track data is embedded in the wind-stress field. Solar radiation is varied diurnally based on established formulas [85]. When precipitation occurs, the solar radiation input is assumed to decrease by 10%. The above-mentioned procedures for the numerical simulations of Tina and Winnie are almost the same as the procedures described in section 2.3. Figure 16 plots time series of observed and simulated temperature at the location of the moored buoy at 0m, 50m, and 100m depths. Decreases in SST (sea temperature at 0m) induced by Tina and Winnie are successfully simulated. Even though short-period variations of sea temperature at 50m depth are poorly simulated, the trends of sea temperature at 50m agree with the observations. Negative biases in sea temperature are found at 100m depth, partly because of poor topography and horizontal resolution in MRI.COM.
Figure 17. Time series of a vertical profile of sea temperature from 3 to 23 August 1997 simulated by MRI.COM.
Figure 17 presents a time series of sea temperature from the surface to 100m depth simulated by MRI.COM. Ekman pumping is dominant on 8 August, when Tina passes near the moored buoy. The effect of Ekman pumping on simulated sea temperature corresponds to the decrease in observed sea temperature at 50m depth and the negative bias of simulated sea temperature at 100m depth. In contrast, mixed-layer deepening is remarkable on 18 August, when Winnie approaches the moored buoy and passes on its left side. The mixed-layer deepening is consistent with the observation in Figure 15. In general, the seasonal thermocline is lowered by mixed-layer deepening. The lowered seasonal thermocline then results in downward transport of warm water. Therefore, the negative bias of simulated sea temperature at 100m depth may be caused by insufficient vertical turbulent mixing at the mixed-layer base. Figure 18 presents latitude-depth sections of sea temperature and current components along 126.25°E. Tina induces Ekman pumping on 7 August around 28°N, where the moored buoy is located. In addition, topography-induced upwelling occurs together with Ekman pumping. Both upwelling processes play dominant roles in decreasing SST and in the sudden variation of CO2 flux. On 18 August, however, Ekman pumping occurs around 25°N, where Winnie passes over, and the topography-induced upwelling is relatively weak; mixed-layer deepening is instead dominant around the moored buoy. Even though the SSC formation process differs between Tina and Winnie, both TCs exhibit similar sudden variations of CO2 flux. This suggests that the sudden variation of CO2 flux is independent of the location relative to the center of a TC and of its translation speed. Is the oceanic biochemical response to TCs generally independent of the location relative to the center of a TC and its translation speed, which are closely related to the dynamics and thermodynamics of the oceanic response to a TC? Here we present other examples of the oceanic biochemical response to TCs. In general, nutrients are exhausted in the subtropical western
North Pacific due to upper-ocean stratification, resulting in low chl-a and low primary production. To investigate a sudden variation of chl-a in the subtropical western North Pacific, we performed a numerical simulation of the oceanic response to Typhoon Ketsana in 2003. The procedure of the numerical simulation by MRI.COM is almost the same as that for the numerical simulations of Rex, Tina, and Winnie.
Figure 18. Vertical sections of sea temperature and horizontal current components across 126.25°E at (a) 1500 UTC 7 August and (b) 0600 UTC 18 August 1997. Typhoon symbols indicate the best-track positions of Tina and Winnie, and the triangle indicates the location of the moored buoy, at each integration time.
Figure 19 displays horizontal distributions of simulated SST and of daily SST obtained from the TRMM/TMI and Aqua/AMSR-E microwave optimally-interpolated SST data, a latitude-depth section of simulated sea temperature and current components (meridional and vertical components), and a horizontal distribution of chl-a obtained from a MODIS and SeaWiFS merged dataset produced by NASA GSFC. SSC along the track of Ketsana is successfully simulated (Figure 19a) in comparison with the satellite observation (Figure 19b). Ekman pumping is remarkable at 0000 UTC 25 October 2003 (Figure 19c) around 17°N, 130.5°E, corresponding to the area where chl-a is high and is distributed like a patch (Figure 19d). The high chl-a area is affected by Ketsana’s weaving track, corresponding to the recurvature phase (Figure 19d). This indicates that strong upwelling is induced around the recurvature area, resulting in the patch-like pattern of high chl-a. The sudden decrease in SST caused by Ketsana could recover relatively early, owing to the re-formation of upper-ocean stratification by the input of strong short-wave radiation, even though the patch-like pattern of high chl-a concentration remained for a longer time. It should be noted that no increase in chl-a is seen where SSC occurs along Ketsana’s track after the recurvature. This indicates that an increase in chl-a depends on the location relative to the center of a TC and its translation speed.
Figure 19. (a) Horizontal distributions of SSH (cm: contours) and SST (°C: shading) on 25 October 2003 simulated by MRI.COM. (b) Horizontal distributions of TRMM/AMSR-E daily SST on 25 October 2003. (c) Latitude-depth section of sea temperature and horizontal current components across 130.5°E on 25 October 2003. (d) Horizontal distribution of chlorophyll-a concentration on 29 October 2003 obtained from the MODIS and SeaWiFS merged dataset. [Courtesy of Dr. Kawai]
5. Idealized TC-Like Vortex and SSC

5.1. Idealized Numerical Experiments

The following sections focus on the impact of TC-induced SSC on TC intensity, its intensification, and structural changes. The current understanding of the TC response to TC-induced SSC is that SSC leads to the suppression of TC intensification, resulting in relatively weak intensity and structural changes around the core of TCs [17-21]. The dynamics associated with the formation of the inner core of a TC are represented by mesovortices [86, 87], filamentation [88, 89], vortex Rossby waves [90, 91], and so on. These processes have been considered important for understanding TC intensification and subsequently improving TC intensity predictions, independent of TC-induced SSC. Here we address the effects of TC-induced SSC on the mesovortices. The effects of TC-induced SSC on filamentation and vortex Rossby waves have already been investigated [51]. To this end, we perform idealized numerical experiments using a nonhydrostatic atmosphere model coupled with the mixed-layer ocean model [51] described in section 2.1. In the present numerical experiments, the mixed-layer ocean model consists of three layers and four levels. The uppermost layer represents a mixed layer where density is vertically uniform, the middle layer represents a thermocline where the vertical temperature gradient is the greatest of the three layers, and the bottom layer is assumed to be undisturbed by entrainment. Hereafter, we use the abbreviation “NHM” for the atmosphere-ocean noncoupled version and “NCM” for the atmosphere-ocean coupled version. It should be noted that the diurnally-varying SST scheme described in section 2.2 is not used in the present numerical experiments.
5.2. Model and Experiment Design

The NHM and the atmospheric part of NCM have 301 × 301 horizontal grid points with a horizontal grid spacing of 2km, 40 vertical levels with variable intervals from 40m at the lowermost layer near the surface to 1180m at the uppermost layer, and a top height of nearly 23km. The Coriolis parameter is assumed to be 5.0 × 10-5 s-1. Cumulus parameterization is not used in the present numerical experiments because the horizontal resolution of 2km is relatively fine. The horizontal resolution of the mixed-layer ocean model is 2km, the same as in NHM and the atmospheric part of NCM. The ocean bottom is assumed to be flat, at 1000m depth. The atmosphere-ocean coupling procedure exchanges short-wave and long-wave radiation, sensible and latent heat fluxes, zonal and meridional wind stresses, and accumulated precipitation during the time step of the ocean model. The atmosphere model provides these atmospheric forcings to the ocean model. Topography information is also provided to the ocean model at the initial time for adjusting the land-sea distributions between the atmosphere and ocean models. The ocean model then provides the SST it calculates to the atmosphere model. The time step in NHM and the atmospheric part of NCM is 7.5 seconds. The interval of the exchange between the atmosphere and ocean models is 45 seconds, corresponding to the time step of the ocean model (see the driver-loop sketch below). The integration time is 81h in both NHM and NCM. The integration in NCM begins 27h after the start of the integration in NHM.
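The exchange timing described above (7.5-second atmospheric steps, a 45-second ocean step, and an exchange every ocean step) can be sketched as a simple driver loop. The atmos and ocean objects and their methods are hypothetical placeholders, not the coupled model's actual interface.

DT_ATMOS = 7.5                                  # s, NHM / atmospheric part of NCM
DT_OCEAN = 45.0                                 # s, ocean step and exchange interval
STEPS_PER_EXCHANGE = int(DT_OCEAN / DT_ATMOS)   # 6 atmospheric steps per exchange

def run_coupled(hours, atmos, ocean):
    # Driver-loop sketch: the atmosphere takes six 7.5-s steps, passes
    # accumulated fluxes (radiation, heat, wind stress, precipitation)
    # to the ocean, which takes one 45-s step and returns SST.
    n_exchanges = int(hours * 3600 / DT_OCEAN)
    for _ in range(n_exchanges):
        for _ in range(STEPS_PER_EXCHANGE):
            atmos.step(DT_ATMOS)                # placeholder interface
        fluxes = atmos.accumulated_fluxes()     # placeholder interface
        sst = ocean.step(DT_OCEAN, fluxes)      # placeholder interface
        atmos.set_sst(sst)                      # placeholder interface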
Figure 20. Initial vertical atmospheric profiles of potential temperature (θ, black solid line), equivalent potential temperature (θe, gray solid line), and saturated equivalent potential temperature (θe*, dashed line). [From Wada, 2009]
Figure 21. Initial vertical profile of wind across the vortex’s center. Dashed lines indicate the wind speed. Contour intervals are 2 or 3ms-1. Arrow feathers indicate wind speed and direction. Upward feathers indicate northerly and downward feathers southerly wind. [From Wada, 2009]
The boundary conditions are fixed and provided every six hours during the integration. Initial and boundary conditions for temperature and relative humidity in NHM and the atmospheric part of NCM are based on a homogeneous profile averaged over a 600km square domain collocated at the center of Typhoon Namtheun at 0000 UTC 26 July 2004, calculated from the Japan Meteorological Agency regional analysis data. The date corresponds to the early intensification phase of Namtheun. Figure 20 plots the initial profiles of potential temperature $\theta = T\,(p_0/p_i)^{R/c_p}$, equivalent potential temperature $\theta_e = \theta \exp\!\left(L q_s / (c_p T)\right)$, and saturated equivalent potential temperature $\theta_e^{*} = \theta \exp\!\left(L q_s(T) / (c_p T)\right)$, where T indicates the temperature, $p_i$ is the pressure, $p_0$ is the reference pressure, R is the gas constant, $c_p$ is the specific heat at constant pressure, L is the latent heat of vaporization of water, and $q_s$ or $q_s(T)$ is the saturated specific humidity (for a certain temperature T). The initial field of wind stress is assumed to be an idealized TC-like vortex (Figure 21). The maximum wind speed is set to 20ms-1, and the radius of maximum wind speed is 80km. The initial SST is assumed to be homogeneous and ranges from 28°C to 34°C at 1°C intervals (Table 1). Hereafter, the series of numerical experiments are named “EN” (numerical experiments by NHM) and “EC” (numerical experiments by NCM). For example, “EN28” indicates a numerical experiment by NHM with an initial SST of 28°C, while “EC30” indicates a numerical experiment by NCM with an initial SST of 30°C. The initial ocean-temperature profile is determined as follows. The temperature at the mixed-layer base is one degree lower than the given initial SST, the temperature at the thermocline base is twelve degrees lower than the initial SST, the temperature at the bottom is set to be homogeneous at 5°C, the initial salinity is assumed to be 35 at all levels, the initial mixed-layer depth is set to 30m, the initial thermocline depth is 170m, and the initial thickness of the third layer is 800m. A short computational sketch of the three thermodynamic profiles follows Table 1.

Table 1. Abbreviations of numerical experiments, values of initial SST, and coupled (NCM) or noncoupled (NHM) ocean [From Wada, 2009]

Experiment   Initial SST (°C)   NHM/NCM
EN28         28                 NHM
EN29         29                 NHM
EN30         30                 NHM
EN31         31                 NHM
EN32         32                 NHM
EN33         33                 NHM
EN34         34                 NHM
EC28         28                 NCM
EC29         29                 NCM
EC30         30                 NCM
EC31         31                 NCM
EC32         32                 NCM
EC33         33                 NCM
EC34         34                 NCM
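The three thermodynamic profiles in Figure 20 follow directly from the definitions above. The sketch below assumes standard values for the physical constants and a Tetens-type formula for the saturation vapor pressure (the chapter does not specify which formula is used); following common convention, θe is evaluated here with the specific humidity actually present, whereas the chapter's notation writes qs for both θe and θe*.

import numpy as np

R_DRY = 287.04   # J kg-1 K-1, gas constant for dry air (assumed value)
CP = 1004.0      # J kg-1 K-1, specific heat at constant pressure
L_V = 2.5e6      # J kg-1, latent heat of vaporization
P0 = 1000e2      # Pa, reference pressure

def q_saturation(T, p):
    # Tetens-type saturation vapor pressure (an assumed formula),
    # converted to saturation specific humidity.
    t_c = T - 273.15
    e_s = 611.2 * np.exp(17.67 * t_c / (t_c + 243.5))
    return 0.622 * e_s / (p - 0.378 * e_s)

def theta(T, p):
    # Potential temperature: theta = T (p0 / p)^(R / cp)
    return T * (P0 / p) ** (R_DRY / CP)

def theta_e(T, p, q):
    # Equivalent potential temperature with specific humidity q
    # (common convention; the chapter's notation writes qs here).
    return theta(T, p) * np.exp(L_V * q / (CP * T))

def theta_e_sat(T, p):
    # Saturated equivalent potential temperature, using qs(T).
    return theta(T, p) * np.exp(L_V * q_saturation(T, p) / (CP * T))

print(theta(300.0, 950e2), theta_e(300.0, 950e2, 0.015), theta_e_sat(300.0, 950e2))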
5.3. Effect of Central Pressure Evolution
Figure 22a depicts the evolution of central pressure simulated by NHM in EN28, EN30, EN32, and EN34. It clearly indicates that a higher initial SST causes earlier development of the vortex and more rapid intensification over a shorter period. We define rapid intensification as a decrease in central pressure of more than 6hPa in three hours (48hPa in a day); a detection sketch is given below. The lowest central pressure is 866.8hPa at 75h in EN34. The largest difference in central pressures between EN28 and EN34 exceeds 80hPa at 36h, corresponding to an early development phase in EN28 and a mature phase in EN34.
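This rapid-intensification criterion is easy to apply to a simulated central-pressure time series; a minimal sketch assuming regularly spaced (e.g., hourly) output follows.

import numpy as np

def rapid_intensification_flags(pressure_hpa, dt_h=1.0, dp=6.0, window_h=3.0):
    # Flag times where central pressure fell by more than dp (hPa)
    # over the preceding window_h hours; the text's criterion is
    # > 6 hPa in 3 h, equivalent to 48 hPa per day.
    p = np.asarray(pressure_hpa, dtype=float)
    lag = int(round(window_h / dt_h))
    flags = np.zeros(p.size, dtype=bool)
    flags[lag:] = (p[:-lag] - p[lag:]) > dp
    return flags

# Illustrative hourly series: slow deepening, then a rapid fall
p = np.array([1000.0, 999.0, 998.0, 996.0, 989.0, 981.0, 974.0])
print(rapid_intensification_flags(p))   # last three entries are True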
Figure 22. Evolution of (a) central pressure and (b) the radius of maximum wind speed in EN28, EN30, EN32, and EN34. Solid line indicates an initial SST of 28°C; thick-dashed line, that of 30°C; dashed line, that of 32°C; and thick solid line, that of 34°C. [From Wada, 2009].
In contrast, there is no clear difference in the radius of maximum wind speed among EN28, EN30, EN32, and EN34, particularly from 42h to 63h (Figure 22b). While the central pressure remains near 1000hPa from 0h to 18h, the variations of the radius of maximum wind speed are large and periodic, implying that the location of the central pressure is oscillatory, not stationary. The oscillatory feature may be similar to the behavior in the preconditioning phase of TCs [92]. When the central pressure falls rapidly, the radius of maximum wind speed becomes small but still varies periodically. Once the vortex reaches the mature phase, the central pressure and the radius of maximum wind speed rarely change but rather retain their lowest values. It should be noted that the radius of maximum wind speed begins to expand significantly after 63h, during the mature phase, when the initial SST is 34°C. Figure 23a depicts the evolution of central pressure calculated by NCM in EC28, EC30, EC32, and EC34. It indicates that higher in situ SST around a TC causes earlier development of vortices and more rapid intensification, the same as in the evolution of central pressures in EN. We regard “in situ SST around a TC” as the SST where the central pressure is the lowest. The in situ SST around a TC is affected by vertical turbulent mixing and Ekman pumping induced by the TC-like vortex. It should be noted that the effect of Ekman pumping on SSC is relatively small in the present numerical experiments due to the relatively small number of vertical layers and the features of the initial stratification. The lowest central pressure calculated by NCM is 888.5hPa at 80h in EC34. The difference in central pressures between EC28 and EC34 exceeds 80hPa at 43h, seven hours later than the largest difference in central pressures between EN28 and EN34. The difference in central pressures between EN28 and EN34 exceeds the difference between EC28 and EC34 from 27h to 38h and from 57h to 81h. The effect of SSC on the evolution of central pressure begins to appear significant at the initiation of the rapid intensification phase (27h) and rarely changes during the mature phase (after 57h). In contrast, the difference in central pressures between EN28 and EN34 becomes smaller than that between EC28 and EC34 from 39h to 56h, when the vortex has reached a mature phase in EN while it still undergoes rapid intensification in EC. These results imply that vortex-induced SSC suppresses the acceleration of intensification. The sensitivity of the radius of maximum wind speed to in situ SST is high from 27h to 36h in EC, particularly when the initial SST is relatively low (Figure 23b). When the central pressure falls rapidly, the radius of maximum wind speed begins to vary periodically while decreasing in size. This shrinking also occurs in EN. In the latter part of the numerical integration, the radius of maximum wind speed rarely changes and remains at its lowest value in EC28, EC30, and EC32, while the radius of maximum wind speed increases in EC34 after 72h. The smallest radius remains at nearly 8km during the mature phase in both EN and EC. When the initial SST is 34°C, the radius of maximum wind speed increases in EN34 after 63h (Figure 22b) and in EC34 after 72h (Figure 23b). In situ SST, not SSC, may influence the increase of the radius of maximum wind speed during the mature phase because the increased radius of maximum wind speed is clearly seen in EC34 but only slightly seen in EN32.
Figure 23. Same as Figure 22, except in EC28, EC30, EC32, and EC34. [From Wada, 2009]
Figure 24a indicates the relationship between initial SSTs and central pressures in EN28-34 from 27h to 81h. The relationship can be roughly divided into two phases: the rapid intensification phase and the mature phase. The period of rapid intensification is shorter when in situ SST is higher, indicating a larger three-hour decrease in central pressure for a higher initial SST. During the mature phase, the central pressure is roughly approximated by a linear function of in situ SST. This linear relation is similar to an empirical formula for maximum potential intensity derived from the relationship between the minimum central pressure and climatological SSTs in the western North Pacific [93] and differs from the climatological relationship [11]. Figure 24b illustrates the relationship between initial SSTs and central pressures in EC28-34 from 27h to 81h. The relationship can be roughly divided into three phases: the rapid intensification phase, the moderate intensification phase, and the mature phase. Unlike the relationship in Figure 24a, the relationship in Figure 24b is similar to the climatological relationship [11]. Hereafter, the moderate intensification phase is referred to as the transition phase, as presented in Figure 24b. The transition phase is due to the effect of SSC
caused by the TC-like vortex on the central pressure and reveals the significant suppression of TC intensification due to SSC.
Figure 24. Relationship between in situ SSTs and central pressures in EN (a) and in EC (b). The horizontal axis indicates SST (°C), and the vertical axis indicates central pressure (hPa). Solid circles represent the intensification phase, and dashed circles, the mature phase. [From Wada, 2009]
5.4. Effect of Structural Change

This section addresses the impacts of initial SST and SSC induced by a TC-like vortex on its structural change. First, we focus on the evolution of the atmospheric boundary-layer height, the warm-core potential temperature anomaly, and its height within a radius of 80km from the vortex’s center. The atmospheric boundary-layer height, warm-core potential temperature anomaly, and its height are calculated every hour from 27h to 81h in both EN and EC. The atmospheric boundary-layer height is defined as the domain-averaged height at which the difference in potential temperature from the lowermost atmosphere level is 5K. The warm-core potential temperature anomaly is defined as the maximum difference between the computed grid value and the horizontal mean value for each atmosphere level. The warm-core height is defined as the height of the highest warm-core potential temperature anomaly. A sketch of these diagnostics is given below.
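A minimal sketch of these three diagnostics on a regular (nz, ny, nx) grid follows. The restriction to an 80km radius from the vortex center is omitted for brevity, and the array layout and field values are assumptions.

import numpy as np

def boundary_layer_height(z, theta):
    # Domain-averaged height at which theta first exceeds the
    # lowermost-level value by 5 K (the text's definition).
    # theta has shape (nz, ny, nx); z holds the nz level heights.
    z = np.asarray(z, dtype=float)
    above = (theta - theta[0]) >= 5.0
    idx = np.argmax(above, axis=0)            # first level meeting the criterion
    idx[~above.any(axis=0)] = len(z) - 1      # cap columns that never meet it
    return float(z[idx].mean())

def warm_core(z, theta):
    # Warm-core anomaly: max over levels of (grid maximum - level mean);
    # warm-core height: height of the level where that anomaly peaks.
    anomaly = theta.max(axis=(1, 2)) - theta.mean(axis=(1, 2))
    k = int(np.argmax(anomaly))
    return float(anomaly[k]), float(z[k])

# Hypothetical 40-level, 301 x 301 field (illustrative values only)
z = np.linspace(20.0, 23000.0, 40)
theta = 300.0 + 0.004 * z[:, None, None] + np.zeros((40, 301, 301))
print(boundary_layer_height(z, theta))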
Figure 25. Evolution of (a) atmospheric boundary layer height (m), (b) warm-core height (m), and (c) warm-core potential temperature anomaly (°C). The horizontal axis indicates the integration time. Results are indicated for EN (left) and EC (right). [From Wada, 2009]
Figure 25a depicts a time series of the average atmospheric boundary-layer height together with its standard deviation for EN28 to EN34 (Figure 25a, left) and EC28 to EC34 (Figure 25a, right). The standard deviations represent ranges of atmospheric boundary-layer height for initial SSTs from 28°C to 34°C. The average atmospheric boundary-layer height in EN tends to be low with periodic variations from 27h to 60h. In contrast, the periodic
variation of the atmospheric boundary-layer height is not clear in EC. During the mature phase, the average atmospheric boundary-layer height in EN remains constant with slight variations, particularly from 69h to 81h, while the average atmospheric boundary-layer height continues to decrease in EC after 48h. Figure 25b depicts the time series of the warm-core height together with its standard deviation for EN28 to EN34 (Figure 25b, left) and EC28 to EC34 (Figure 25b, right). The relatively large standard deviations seen in the earlier integration are caused by differences in initial SSTs and the differences in the vortex’s phases between high and low SSTs. The warm-core potential temperature anomalies increase significantly in both EN (Figure 25c, left) and EC (Figure 25c, right) when the atmospheric boundary-layer heights decrease during the intensification phase. This indicates that warming around the vortex’s center leads to a lowering of the atmospheric boundary-layer heights during the intensification phase. The rate of increase in the warm-core potential temperature anomalies in EN is higher than the rate in EC for the same SST. Figure 25c reveals that SSC can produce a difference in the warm-core potential temperature anomalies between EN and EC during the intensification phase. The standard deviations of warm-core height in EC are relatively small after 47h (Figure 25b). During the mature phase, the difference in warm-core potential temperature anomalies between EN and EC remains constant. The increasing rates of warm-core potential temperature anomalies in EC are almost the same as those in EN after 47h (Figure 25c). Therefore, the impact of SSC on warm-core potential temperature anomalies is not significant during the mature phase.

Potential vorticity (PV) is introduced in order to investigate the structural change of a TC-like vortex near the surface and to clarify the impact of SSC and initial SST on that change during the intensification phase. PV is proportional to the product of absolute vorticity and the potential temperature gradient and is formulated as

\[ PV = -g\left(\zeta_\theta + f\right)\frac{\partial \theta}{\partial p} , \qquad (52) \]

where $\zeta_\theta$ is the relative vorticity on an isentropic surface of θ, f is the local planetary vorticity, and p is the vertical pressure coordinate. Figure 26 plots the horizontal distributions of PV at 20m height in EN30 (Figures 26a-c) and EN32 (Figures 26d-f). In EN30, three small 10km-scale mesovortices with high PV (above 40PVU) are seen at each vertex of a triangular pattern at 27h (Figure 26a). The initial vortex-scale cyclonic circulation includes these mesovortices, and the circulation and the mesovortices intensify as they move toward the vortex’s center (Figure 26b). The triangular pattern then changes into a comma-like pattern at 30h (Figure 26b). The comma-like high-PV pattern becomes a circular pattern at 33h (Figure 26c). However, the circular high-PV pattern does not form a completely circular PV ring. In EN32, an elliptical pattern is seen at 27h (Figure 26d), indicating that rapid intensification has already started. The highest PV in EN32 exceeds that in EN30 at 27h (Figure 26a). The wave number 1 elliptical high-PV pattern then changes into a wave number 2 pattern at 30h (Figure 26e), and the amplitude of PV increases around the mesovortices. At 33h, the elliptical pattern changes into a complete circular pattern. The complete circular pattern indicates that a PV ring has been established (Figure 26f). The PV ring in EN32 is more robust than that in EN30 (Figure 26c).
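Eq. (52) is simple to evaluate once the isentropic relative vorticity and the vertical gradient of potential temperature are available; a minimal sketch follows, with illustrative input values chosen only to give PV magnitudes of the order discussed here.

G = 9.81             # m s-2
F_CORIOLIS = 5.0e-5  # s-1, the value assumed in the experiment design

def potential_vorticity_pvu(zeta_theta, dtheta_dp):
    # Eq. (52): PV = -g (zeta_theta + f) dtheta/dp, expressed in PVU
    # (1 PVU = 1e-6 K m2 kg-1 s-1). zeta_theta is relative vorticity on
    # an isentropic surface (s-1); dtheta_dp is in K Pa-1 and is negative
    # in a statically stable atmosphere, so PV comes out positive.
    return -G * (zeta_theta + F_CORIOLIS) * dtheta_dp / 1e-6

# Illustrative mesovortex values: strong cyclonic vorticity and stable
# stratification give PV of several tens of PVU, as in Figures 26-27
print(potential_vorticity_pvu(zeta_theta=5.0e-3, dtheta_dp=-1.0e-3))  # ~49.5 PVU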
Figure 26. Horizontal distributions of PV and horizontal wind at 20m height (a) at 27h in EN30, (b) at 30h in EN30, (c) at 33h in EN30, (d) at 27h in EN32, (e) at 30h in EN32, and (f) at 33h in EN32. Contour intervals are 20PVU. [From Wada, 2009]
Figure 27. Same as Figure 26 except for (a) at 30h in EC30, (b) at 33h in EC30, (c) at 30h in EC32, and (d) at 33h in EC32. [From Wada, 2009]
Figure 27 depicts the horizontal distributions of PV at 20m height at 30h and 33h in EC30 and EC32. In EC30, two small 10km-scale mesovortices with high PV (above 60PVU) are notable at the vertices of a triangular pattern at 30h (Figure 27a). The pattern is similar to that at 27h in EN30 but differs from the comma-like high-PV pattern at 30h in EN30 (Figure 26b). A comma-like high-PV pattern is seen at 33h (Figure 27b). The pattern clearly differs from the complete circular pattern in Figure 26c. In EC30, the PV area exceeding 20PVU is larger than that in EN30 at 33h, implying that inward angular momentum transport becomes weak due to SSC induced by a TC-like vortex. In EC32, a circular pattern of high PV, not an elliptical pattern, has already formed at 30h (Figure 27c), indicating a remarkable wave number 1 pattern. The amplitude of the radial gradient of PV in EC32 becomes smaller than that in EN32 (Figure 26e). The amplitude of high PV increases from 30h to 33h (Figure 27d). Nevertheless, the amplitude of PV and its
radial gradient are smaller than those in EN32 (Figure 26f). These results suggest that SSC causes both a decrease in the amplitude of PV and a reduced radial gradient of PV. When the initial SST is high, the radius of maximum wind speed increases significantly, particularly in EN34 and EC34 (Figures 22b and 23b). We introduce the vortex Rossby number here for discussing the relationship between the radius of maximum wind speed and in situ SST in EN32, EN34, EC32, and EC34. The vortex Rossby number Ro is defined as
$$ Ro = \frac{V}{fR}, \qquad (53) $$
where R indicates the radius of maximum wind speed and V represents the maximum wind speed at 20m height.
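As a quick worked example of Eq. (53), the sketch below computes the vortex Rossby number; the numerical values are illustrative round numbers, not taken from the experiments.

```python
import math

def vortex_rossby_number(v_max, r_max, lat_deg):
    """Vortex Rossby number Ro = V / (f R) from Eq. (53)."""
    omega = 7.292e-5                                   # Earth's rotation rate (s^-1)
    f = 2.0 * omega * math.sin(math.radians(lat_deg))  # local planetary vorticity
    return v_max / (f * r_max)

# Illustrative values: V = 50 m/s at 20 m height, R = 50 km, latitude 20 deg N.
print(vortex_rossby_number(50.0, 50.0e3, 20.0))  # ~20: rotation-dominated vortex
```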
Figure 28. Time series of the vortex Rossby number in EN32 (solid line), EC32 (dashed line), EN34 (thick solid line) and EC34 (thick dashed line). The horizontal axis indicates integration time (hours), and the vertical axis indicates the vortex Rossby number defined in Eq. (53). [From Wada, 2009]
Figure 28 depicts the time series of the vortex Rossby number. The vortex Rossby number increases during the intensification phase, remains almost constant during the transition phase, and decreases during the mature phase. The vortex behaves chaotically during the intensification phase. During the transition phase, the variations become smaller than those during the intensification phase, indicating that some chaotic behavior remains. The vortex then becomes more systematic during the mature phase. When the initial SST is high, the radius of maximum wind speed increases during the mature phase without maintaining a high vortex Rossby number. An increase in the radius of maximum wind speed leads to a reduction in the vortex Rossby number when the maximum wind speed changes little. An increase in the radius of maximum wind speed is independent of the SSC induced by a TC-like vortex but dependent on in situ SST. In other words, when in situ SST is high, the maximum wind speed is not always enhanced, so a low vortex Rossby number is maintained.
6. Numerical Simulation and Oceanic Conditions

6.1. Mature Phase of Typhoon Namtheun (2004)
The idealized numerical experiments suggest that SSC and initial SST play their own roles in determining TC intensity, intensification, and structural change. However, it is not clear whether or not environmental oceanic preexisting conditions affect these. In fact, environmental oceanic preexisting conditions vary on intraseasonal, interannual, and decadal scales. Apart from the long-term oceanic variations, there is a problem in oceanic reanalysis datasets in that their quality depends on the temporal-spatial frequency of observations. There are still insufficient numbers of observations, even though nearly three thousand Argo floats have been deployed in the world's oceans. First, we demonstrate the impact of the frequency of oceanic observations (in situ observations by voluntary ships, research vessels, moored and drifting buoys, and floats, as well as satellite altimeter observations) on the predictions of Typhoon Namtheun in 2004 during the mature phase using both the NHM and the NCM.
Figure 29. Left panel: Domain used in the numerical prediction of Typhoon Namtheun in 2004. Right panel: The procedure of numerical predictions.
Figure 29 illustrates the computational domain in the outer nest of NHM and NCM, with a horizontal grid spacing of 6km and 511 x 511 horizontal grids. The number of vertical levels is 40, the same as in the numerical experiments described in section 5. We use two computational domains here, with two-way double nesting in NHM and NCM. The domain in the inner nest, with a horizontal grid spacing of 2km and 511 x 511 horizontal grids, is relocated to cover the overall cyclonic circulation of Namtheun. The initial time of the numerical simulations is 0000 UTC on 29 July 2004. Initial and boundary conditions are provided from the results calculated by the GSM and TYM (Figure 29). The boundary conditions are provided every three hours. Grell's cumulus parameterization [94] is used instead of the Kain-Fritsch parameterization [61], except for the numerical simulation by the NHM and NCM with a horizontal grid spacing of 2km (no cumulus parameterization is used). The diurnally varying SST scheme described in section 2.2 is not used in the present
numerical simulations. The integration time of NHM and NCM is 39h. First, the numerical simulations are performed by NHM only, for 30 hours. After 30h, numerical simulations by the NHM and NCM with a horizontal grid spacing of 2km are performed simultaneously, in addition to the simulations with a horizontal grid spacing of 6km. The time step is 15 seconds in NHM and NCM with a horizontal grid spacing of 6km and 7.5 seconds with a horizontal grid spacing of 2km. To investigate the sensitivity of oceanic preexisting conditions and TC predictions to the frequency of oceanic observations, we prepare the following three types of oceanic reanalysis datasets for TC intensity predictions by NHM and NCM using the North-Pacific version of MOVE [45] (the setup is summarized in the sketch after the list below):
A. No assimilation of in situ observations or of SSH data observed by two satellite altimeters, JASON1 and ENVISAT;
B. No assimilation of in situ observations, but assimilation of SSH data;
C. Assimilation of both in situ observations and SSH data.
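The experimental design just described can be condensed into a small configuration summary. This is purely illustrative: the dictionary layout and key names are assumptions for readability, not the actual namelists of NHM, NCM, or MOVE.

```python
# Illustrative summary of the Namtheun (2004) prediction setup described above.
# Key names and structure are assumptions for clarity, not real model namelists.
namtheun_setup = {
    "initial_time": "2004-07-29 00:00 UTC",
    "integration_hours": 39,
    "outer_nest": {"grid": (511, 511), "dx_km": 6, "dt_s": 15.0,
                   "cumulus": "Grell"},   # Grell scheme [94] on the 6 km grid
    "inner_nest": {"grid": (511, 511), "dx_km": 2, "dt_s": 7.5,
                   "cumulus": None,       # no cumulus parameterization at 2 km
                   "start_hour": 30},     # inner nest activated after 30 h
    "vertical_levels": 40,
    "boundary_conditions": {"source": "GSM/TYM", "interval_hours": 3},
    "ocean_reanalysis_cases": {           # MOVE [45] sensitivity cases
        "A": "no in situ, no altimeter SSH",
        "B": "altimeter SSH only",
        "C": "in situ + altimeter SSH",
    },
}
```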
Figure 30. Locations of in situ observations in July 2004. Circles represent the locations in early July, squares those in middle July, and triangles those in late July. [Courtesy of Dr. Usui]
Figure 30 plots the locations of in situ observations in July 2004. Some fixed-line observations are carried out by research vessels and cruise liners. There are relatively few Argo floats. Horizontal distributions of the observation frequencies of in situ platforms and of the JASON1 and ENVISAT satellite altimeters, gridded at 5°, clearly reveal a difference in data coverage (Figure 31). The difference in data coverage affects the oceanic temperature field calculated by the North-Pacific version of MOVE (Figure 32). The 29°C isotherm hardly extends to the east on 29 July 2004 when neither in situ observations nor SSH data are assimilated (Figure 32a).
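The coverage maps in Figure 31 amount to counting observations in 5° boxes. A minimal sketch of that binning is given below; treating the observation positions as plain longitude/latitude arrays, and the domain bounds, are assumptions for illustration.

```python
import numpy as np

def count_obs_5deg(lon, lat, lon_range=(100, 180), lat_range=(0, 60)):
    """Count observations in 5-degree boxes, as in the coverage maps of Figure 31."""
    lon_edges = np.arange(lon_range[0], lon_range[1] + 5, 5)
    lat_edges = np.arange(lat_range[0], lat_range[1] + 5, 5)
    counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
    return counts  # shape: (number of longitude boxes, number of latitude boxes)

# Illustrative usage with random positions standing in for Argo/altimeter data.
rng = np.random.default_rng(0)
counts = count_obs_5deg(rng.uniform(100, 180, 500), rng.uniform(0, 60, 500))
```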
Figure 31. Horizontal distributions of the number of in situ oceanic observations (a), SSH observed by the ENVISAT satellite altimeter (b), and SSH observed by the JASON1 satellite altimeter (c) in late July with a grid spacing of 5°. Typhoon symbols indicate the track of Typhoon Namtheun (2004). [Courtesy of Dr. Usui]
Figure 32. Horizontal distribution of SST (shading and contours) on 29 July 2004 and predicted TC positions (typhoon symbols) every six hours. (a) Case A, (b) Case B, and (c) Case C.
Figure 33. Best-track and predicted positions of Typhoon Namtheun in 2004. Diamonds indicate the best track; circles, the result of Case A; triangles, the result of Case B; and crosses, the result of Case C.
The 29°C isotherm extends to around 136°E when SSH data are assimilated (Figure 32b) and extends to around 139°E when both in situ observations and SSH data are assimilated (Figure 32c). The horizontal distribution in Case C successfully reproduces the Kuroshio meandering and a cold eddy off the Kii Peninsula, revealing that both in situ observations and SSH data are required for precisely reproducing oceanic fields. The impact of the frequency of oceanic observations is also seen in the Japan Sea (East Sea), where the oceanic preconditions are quite different among the three assimilation results. We perform numerical predictions using NHM, NCM, and the three different oceanic preconditions. Figure 33 plots the best-track positions and the tracks predicted by the NCM with the three different oceanic preexisting conditions. There is no clear difference in the track predictions. All predicted tracks are aligned west-northwestward, which is consistent with the best-track positions. This suggests that a difference in oceanic preexisting conditions rarely affects the track predictions for Namtheun during the mature phase.
Figure 34 plots the time series of best-track central pressure and the central pressures predicted by NHM (Figure 34a) and by NCM (Figure 34b) with three different oceanic preexisting conditions. Even though the best-track central pressure indicates that Namtheun enters the mature phase, the central pressures predicted by NHM indicate that the simulated Namtheun enters the intensification phase during the integration (Figure 34a). In the NCM, by contrast, the intensification is suppressed due to SSC induced by Namtheun, regardless of the difference in initial oceanic preconditions. The difference in predicted central pressure among Cases A, B, and C is ~5hPa. The result indicates that SSC plays an essential role in determining TC intensity, while oceanic preexisting conditions play a minor role during the mature phase. Figure 35 presents the three-dimensional structure of the predicted Namtheun at 36h for Case C. SSC appears right behind Namtheun. A vortex has already been established and is accompanied by spiral bands outside the vortex. We can see some features of the three-dimensional structure of the predicted Namtheun, such as strong mesoscale convection accompanied by spiral bands and significant outflow in the upper troposphere.
Figure 34. Time series of minimum central pressure (a) in the noncoupled model and (b) in the coupled model. Diamonds indicate the best-track central pressure; circles, the result of Case A; triangles, the result of Case B; and crosses, the result of Case C.
Figure 35. Three-dimensional picture of Typhoon Namtheun in 2004. Cloud water content is displayed by volume rendering, and SST and ground temperature are also presented by shading.
6.2. Intensification Phase of Typhoon Hai-Tang (2005)
6.2.1. Reanalysis and Observation of Hai-Tang

The result of the numerical prediction for the mature phase of Typhoon Namtheun in 2004 suggested that the impact of oceanic preconditions on TC intensity was ~5hPa. Here we address the impact of oceanic preexisting conditions on TC intensity in the intensification phase of Typhoon Hai-Tang in 2005. A previous study reported that a pre-existing cyclonic flow, represented by negative SSHA, played a crucial role in enhancing SSC induced by Hai-Tang [95]. Figure 36a depicts the horizontal distribution of ten-day mean SSH and maximum SSC from 1 to 10 July 2005. Maximum SSC is defined here as the maximum decrease in SST from the first day (1 July 2005) during the period of 1 to 10 July 2005. Figure 36b presents a distribution similar to that in Figure 36a but for an analyzed period of 11 to 20 July 2005; the maximum decrease in SST is calculated from 11 July 2005 during the period of 11 to 20 July 2005. High-resolution satellite altimeter data can capture relatively low SSH in certain areas around 23ºN, 144ºE (C1); 22.5°N, 135°E (C2); 20°N, 130°E (C3); and 21°N, 125°E (C4) [95], which is reproduced in the oceanic reanalysis data [96]. Around the stationary low-SSH areas, low SSH had been input as an oceanic preexisting condition. In Figure 36b, significant SSC is captured in certain areas around 25°N, 140°E and 25°N, 125°E. The oceanic reanalysis data underestimate the Hai-Tang-induced SSC in comparison with high-resolution satellite data because the horizontal resolution (0.5° x 0.5°) of the oceanic reanalysis data and wind forcing is too coarse to resolve the in situ SSC induced by a TC [95]. Nevertheless, comparing Figures 36a and 36b reveals that each SSC event reproduced in the oceanic reanalysis data is caused by the passage of Hai-Tang.
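Maximum SSC as defined above is a simple reduction over a daily SST sequence. A minimal sketch follows; the array layout and function name are assumptions for illustration.

```python
import numpy as np

def max_ssc(sst_daily):
    """Maximum sea-surface cooling relative to the first day.

    sst_daily : daily SST maps, shape (n_days, ny, nx);
                day 0 is the reference (e.g., 1 July or 11 July 2005).
    Returns the maximum decrease in SST (positive values mean cooling).
    """
    cooling = sst_daily[0] - sst_daily[1:]          # decrease from day 0, each day
    return np.clip(cooling, 0.0, None).max(axis=0)  # largest cooling at each point

# Illustrative usage: 10 days of a 50 x 50 SST field.
sst = 29.0 + np.random.default_rng(1).normal(0, 0.3, size=(10, 50, 50))
ssc = max_ssc(sst)
```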
Figure 36. Horizontal distributions of mean SSH (contours: cm) and SSC (shading: °C) derived from MOVE data, (a) from 1 to 10 July 2005 and (b) from 11 to 20 July 2005. Contour intervals are 10cm. The derivation of the amount of SSC is described in the text. Stars indicate the locations of Argo temperature and salinity profiles before and after the passage of Typhoon Hai-Tang in 2005. [From Wada, Sato et al., 2009]
Next, the ocean response to Hai-Tang is investigated on the right of Hai-Tang's track (22.9ºN, 135.5ºE and 22.8ºN, 135.0ºE), behind the typhoon (20.0ºN, 132.2ºE and 20.1ºN, 131.5ºE), and on the left of Hai-Tang's track (17.1ºN, 132.0ºE and 17.0ºN, 132.0ºE) using Argo temperature and salinity profiles obtained before (14 or 15 July 2005) and after (24 or 25 July 2005) the passage (Figure 37). Sea temperature and salinity decrease, and the mixed layer deepens, on the northern side of Hai-Tang (Figure 37a). In contrast, salinity increases along the track of Hai-Tang, where a decrease in sea temperature and mixed-layer deepening occur simultaneously (Figure 37b). Sea temperature begins to increase and salinity begins to decrease around the mixed-layer base along the track of Hai-Tang. A decrease in sea temperature and an increase in salinity are found below the mixed-layer base, probably resulting from Ekman pumping. On the southern side of Hai-Tang, the changes in sea temperature and mixed-layer depth are similar to those in Figure 37a. However, the sea-temperature decrease and salinity increase below the mixed-layer base indicate Ekman pumping. This Ekman pumping is probably caused by Typhoon Banyan during the intensification phase.
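The mixed-layer deepening discussed above can be diagnosed from each Argo profile with a simple threshold criterion. The sketch below uses a temperature drop of 0.5°C below the near-surface value, which is one common convention; the threshold choice and names are assumptions, not the criterion used in the original analysis.

```python
import numpy as np

def mixed_layer_depth(depth, temp, dT=0.5):
    """Depth where temperature first drops dT below the near-surface value.

    depth : 1-D array of depths (m), increasing downward
    temp  : 1-D array of temperatures (degC) at those depths
    """
    below = np.where(temp < temp[0] - dT)[0]
    if below.size == 0:
        return depth[-1]                 # mixed layer extends past the profile
    i = below[0]                         # first level below the threshold
    # Linear interpolation between the two bracketing levels.
    t0, t1 = temp[i - 1], temp[i]
    z0, z1 = depth[i - 1], depth[i]
    target = temp[0] - dT
    return z0 + (z1 - z0) * (t0 - target) / (t0 - t1)
```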
Figure 37. Vertical profiles of temperature and salinity from the surface to 200m depth. Dashed lines indicate the profiles observed before the passage of Hai-Tang, and solid lines indicate those observed after the passage. [Reproduced from Wada, Sato et al., 2009].
6.2.2. Ekman Pumping and Oceanic Preexisting Conditions

In situ observations suggest the occurrence of Ekman pumping along Hai-Tang's track. As described before, TC-induced SSC is known to be caused mainly by Ekman pumping and vertical turbulent mixing [4, 5, 27, 28]. The impact of a difference in oceanic preexisting conditions (for example, mixed-layer depth and the temperature gradient in the thermocline) on SSC is thought to be relatively small compared with Ekman pumping and vertical turbulent mixing [5]. Although the impact of oceanic preexisting conditions on SSC underlying Hai-Tang is remarkable [95], SSC caused by the passage of Hai-Tang may be influenced by Ekman pumping to some extent because cyclonic wind stress results in a divergent flow at the sea surface, which transports the water beneath a TC outward and enables the seasonal thermocline to rise [96]. We investigated the impact of a difference in oceanic preexisting conditions on the ocean response to Hai-Tang using MRI.COM in order to demonstrate the above-mentioned TC-induced dynamics during the passage of Hai-Tang [37]. The procedure for running MRI.COM is the same as described before. We use daily oceanic reanalysis data of sea temperature and salinity on 11 August 2005, calculated by MOVE [45], for nudging the MRI.COM calculation toward the reanalysis. The subsequent run is performed by MRI.COM for 11 days without nudging to the reanalysis data. After the two pre-runs, the regional version of MRI.COM is run for 11 days. Atmospheric forcings are obtained from the NCEP R2 dataset [42]. A Rankine vortex constructed from the best-track data is embedded in the wind-stress field. Solar radiation is varied diurnally based on given formulas [85]. When precipitation occurs, the solar radiation input is reduced by 10%. The above-mentioned procedures for the numerical simulations of Hai-Tang are almost the same as the procedures described in sections 2-3 and 4. In addition, hourly GSMaP data (http://www.radar.aero.osakafu-u.ac.jp/~gsmap/pdf/gsmap_mwr_j_web.pdf) are used for estimating hourly precipitation. When there are no precipitation data, the precipitation is expressed as a function of wind speed [97]: P (mm per hour) = 0.0016 x (v - 25), where P indicates precipitation and v wind speed.
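A minimal sketch of the wind-speed fallback for precipitation given above follows. Treating the rate as zero below 25 m/s is my assumption (the formula would otherwise go negative), and the function name is illustrative.

```python
def precip_from_wind(v):
    """Fallback precipitation rate (mm per hour) from wind speed v (m/s),
    following P = 0.0016 * (v - 25) as given in the text [97].
    Below 25 m/s the formula would be negative, so it is clamped to zero
    (an assumption; the handling of weak winds is not specified in the text)."""
    return max(0.0, 0.0016 * (v - 25.0))

print(precip_from_wind(50.0))  # 0.04 mm per hour for a 50 m/s wind
```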
Figure 38. Horizontal distributions of SST (shading) and SSH (contours) at 120h, corresponding to 1800 UTC on 16 July 2005, when (a) initial oceanic preconditions in 2005 are used as the initial oceanic condition, and (b) initial oceanic preconditions in 1999 are used.
Figure 38 presents horizontal distributions of simulated SST and SSH at 120h. Around 21°N to 22°N, 126°E where Hai-Tang passes, SSH is low in 2005 (Figure 38a) but relatively high in 1999 (Figure 38b). The difference in SSH between 1999 and 2005 is independent of the ocean response to Hai-Tang at 120h. Indeed, Hai-Tang is positioned around 20°N, 130°E at that time. Figure 39 illustrates latitude-depth sections of sea temperature from the surface to 100m depth at 120h. In 2005, the seasonal thermocline is relatively shallow around 21°N to 22°N (Figure 39a), while the seasonal thermocline is relatively deep in 1999 (Figure 39b).
Figure 39. Latitude-depth sections of sea temperature across 126°E at 120h, corresponding to 1800 UTC on 16 July 2005, when (a) initial oceanic conditions in 2005 are used as the initial oceanic condition, and (b) initial oceanic conditions in 1999 are used.
Figure 40 illustrates latitude-depth sections of sea temperature at 192h. Remarkable Ekman pumping occurs at 21°N and 22°N in 2005, resulting in large SSC after the passage of Hai-Tang (Figure 40a). However, SSC is relatively weak in 1999 (Figure 40b). Therefore, oceanic preexisting conditions can affect the formation of SSC through differences in the effects of Ekman pumping occurring behind a TC. In that sense, Ekman pumping cannot be neglected in SSC formation.
Figure 40. Same as Figure 39 except at 192h, corresponding to 2200 UTC on 19 July 2005.
6.2.3. Impact on TC Intensification
We perform numerical simulations using NHM and NCM to investigate the impact of oceanic conditions on TC intensity prediction. It should be noted that "oceanic conditions" include oceanic preexisting conditions and oceanic short-term variations. The oceanic preexisting conditions include the distribution of warm-core eddies or cold wakes; oceanic short-term variations include SSC and diurnally varying SST.

Table 2. Abbreviations of Numerical Predictions, Years of Initial SST, and Coupled or Noncoupled Ocean Model and SG Scheme

Prediction   Year   Ocean Model and SG Scheme
SG05         2005   OCEAN+SG
NO05         2005   OCEAN
NH05         2005   NONE
SG99         1999   OCEAN+SG
NO99         1999   OCEAN
NH99         1999   NONE
Both NHM and NCM have 721 x 421 horizontal grids with a horizontal grid spacing of 6km, 40 vertical levels with variable intervals from 40m at the lowermost layer near the surface to 1180m at the uppermost layer, and a top height of nearly 23km. The vertical-level specification is the same as that in the case of Typhoon Namtheun in 2004. We use the diurnally varying SST scheme here for precisely simulating the diurnally varying SST described in section 2.2. The initial integration time is 1200 UTC on 12 July 2005, corresponding to an early intensification phase. Table 2 lists the numerical predictions. The abbreviation 'SG' indicates a prediction for reproducing a large amplitude of diurnally
varying SST using the SG scheme; the abbreviation 'NO' indicates one for reproducing a small amplitude. We also use the oceanic reanalysis data on 12 July 1999 for investigating the effect of oceanic preexisting conditions on Hai-Tang's prediction. In 1999, a La Niña event was mature, which differs from 2005, when a central-Pacific warming event had terminated.
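As a rough consistency check on the vertical grid described above (40 levels whose spacing stretches from 40m near the surface to 1180m at the top), the sketch below builds a linearly stretched spacing and sums it. Linear stretching is an assumption; the actual NHM/NCM level specification may differ, which is consistent with the small gap from the stated ~23km.

```python
import numpy as np

n_levels = 40
dz = np.linspace(40.0, 1180.0, n_levels)   # assumed linear stretching (m)
z_top = dz.sum()
print(f"model top ~ {z_top / 1000:.1f} km")  # ~24.4 km, within ~6% of the stated ~23 km
```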
Figure 41. Time series of TCHP (kJ cm-2), Z26 (m), SSH (cm), and best-track central pressure (hPa). TCHP, Z26, and SSH are calculated using oceanic reanalysis data.
Figure 41 illustrates the time series of tropical cyclone heat potential (TCHP) [11, 98], the depth of the 26ºC isotherm (Z26), SSH, and best-track central pressure (hPa). TCHP, Z26, and SSH are estimated from daily oceanic reanalysis data provided by the North-Pacific version of the MOVE system [45]. Hai-Tang intensified rapidly when it passed over the high-TCHP area with deep Z26 and high SSH, while no decrease in best-track central pressure was seen when TCHP was low with shallow Z26 and low SSH (Figure 41). The oceanic preexisting conditions thus play a significant role in the intensification of Hai-Tang. If the ocean response to Hai-Tang is influenced by the oceanic preexisting conditions, TC intensity may be related to the oceanic preconditions through a difference in the amount of SSC induced by Hai-Tang. Figure 42 presents horizontal distributions of SST and mixed-layer depth at the initial integration time with best-track and predicted positions in SG05, NO05, SG99, and NO99, and the time series of predicted and best-track central pressures. We can see northwestward errors relative to Hai-Tang's best track for all predicted tracks early in the integration. The northwestward shift is probably caused by a bogus typhoon used as an initial atmospheric condition of TYM [3, 48]. A relatively strong axisymmetric bogus typhoon may result in an excessive beta drift in the track predictions. Due to the northwestward error of the predicted tracks, the predicted Hai-Tang encounters oceanic conditions different from those along the best track, even though a difference in oceanic preexisting conditions rarely affects the track prediction. A difference in predicted central pressures between the oceanic preexisting conditions in 2005 and 1999 (Figure 42) begins to appear significantly at 18h, when the predicted Hai-Tang is located where the horizontal gradient of SST in 2005 is steeper north of Hai-Tang's position than the horizontal
gradient in 1999. The positioning results in less intensification of the predicted Hai-Tang in SG05 and NO05 due to enhanced SSC induced by the predicted Hai-Tang. Indeed, the best-track Hai-Tang rapidly intensifies from 12h to 36h when it passes over warm-core eddies (Figure 41). In contrast, SSC induced by the predicted Hai-Tang becomes excessive compared with satellite observations [95].
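TCHP and Z26, as used in Figure 41 above, can be computed from a temperature profile in a few lines. The sketch below follows the conventional definition of TCHP as the heat content above the 26°C isotherm [11, 98]; the constant density and heat capacity, and the simple treatment of Z26, are assumptions of the sketch rather than the MOVE-based calculation used in the text.

```python
import numpy as np

RHO = 1025.0   # seawater density (kg m^-3), assumed constant
CP = 4.0e3     # specific heat of seawater (J kg^-1 K^-1), assumed constant

def tchp_and_z26(depth, temp):
    """TCHP (kJ cm^-2) and 26C-isotherm depth Z26 (m) from one profile.

    depth : depths (m), increasing downward; temp : temperatures (degC).
    TCHP = rho * cp * integral of (T - 26) dz over the layer where T >= 26,
    assuming the warm layer is contiguous from the surface.
    """
    warm = temp >= 26.0
    if not warm.any():
        return 0.0, 0.0
    z26 = depth[warm][-1]                           # crude Z26: deepest warm level
    excess = np.clip(temp - 26.0, 0.0, None)        # zero below the 26C isotherm
    tchp_j_m2 = RHO * CP * np.trapz(excess, depth)  # J m^-2
    return tchp_j_m2 / 1.0e7, z26                   # 1 J m^-2 = 1e-7 kJ cm^-2

# Illustrative profile: a 29C mixed layer over a linear thermocline.
z = np.arange(0, 201, 10.0)
t = 29.0 - np.clip(z - 40.0, 0.0, None) * 0.05
print(tchp_and_z26(z, t))  # roughly 86 kJ cm^-2 and Z26 = 100 m
```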
Figure 42. Horizontal distributions of SST (contours) and mixed-layer depth (shading) in 2005 (upper panel) and 1999 (middle panel). Squares indicate the best track of Hai-Tang; circles, that in SG05 (upper panel) and SG99 (middle panel); and triangles, that in NO05 (upper panel) and NO99 (middle panel). The lower panel displays time series of central pressures in SG05, NO05, NH05, SG99, NO99, NH99, and the best track.
Diurnally varying SST begins to impact the predicted central pressure significantly after 42h, later than the impact of oceanic preexisting conditions (Figure 41). The maximum difference in central pressures between the SG and NO predictions is ~5hPa when the oceanic preexisting conditions in 1999 are used. In addition, the difference is not significant with the oceanic preexisting conditions in 2005. Therefore, SSC induced by Hai-Tang is the most important factor for precisely predicting TC intensity because its impact on TC intensification is the most remarkable. This result differs from the impact on TC intensity prediction in the mature phase of Typhoon Namtheun in 2004. Oceanic preexisting conditions impact TC intensity prediction more significantly than the diurnally varying SST, even though their impact is relatively small compared with the impact of SSC induced by Hai-Tang.
7. Atmosphere-Wave-Ocean Coupled Model
7.1. Background

According to a simple irreversible Carnot cycle theory [20-22], TC intensity depends on the transfer of momentum and enthalpy between the oceanic and atmospheric boundary layers. The parameterization of the momentum and enthalpy fluxes involves microscale physical processes at the interface between the atmosphere and the ocean, including spray production and advection, the characteristics of the interfacial sublayers, and the surface sea state, including wave breaking and dissipation [62]. Among these physical processes, ocean waves are crucial for understanding TC-ocean interaction because they may affect both air-sea momentum and enthalpy transfers through variations of the roughness length. Recent studies indicated that drag coefficients level off at very high wind speeds (30 to 40ms-1) [99, 100]. Although there are various physical explanations for the behavior of drag coefficients, the previous parameterizations for estimating the drag coefficient clearly overestimate drag at high wind speeds [101]. Momentum fluxes are influenced by surface gravity waves in high-wind conditions. The total momentum flux from the atmosphere can be divided into two or three parts: the flux into the surface waves and the fluxes into the surface currents, supplied from the waves and directly from the wind. The flux into the surface waves is evaluated by integrating the product of the wave growth rate and the momentum density of surface waves over all wavenumbers and directions. The flux into the surface currents from the waves is evaluated by integrating the product of the wave dissipation rate and the momentum density of surface waves over all wave frequencies and directions. Since surface waves play a significant role in determining the momentum flux, the total momentum flux depends highly on the sea state. That is why a wave prediction model must be coupled with an atmosphere-ocean coupled model. Indeed, a surface gravity wave field is not uniquely determined by local wind forcing. The impacts of sea state on TC intensity and its structural change have been well investigated [8, 101-103]. Here we numerically predict Typhoon Hai-Tang in 2005 using an NHM-wave-ocean coupled model preliminarily developed by the author.
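To illustrate the leveling-off behavior cited above [99, 100], the sketch below contrasts a classical linear-in-wind drag formula with a capped version. The coefficients are illustrative round numbers, not the parameterization of any specific model or of the studies cited.

```python
def drag_coefficient(u10, cap=2.5e-3):
    """Illustrative 10-m drag coefficient: linear growth with wind speed,
    capped to mimic the observed leveling off at 30-40 m/s [99, 100].
    The coefficients are round illustrative values, not a published scheme."""
    cd_linear = 1.0e-3 * (0.6 + 0.07 * u10)  # classical linear form (illustrative)
    return min(cd_linear, cap)

for u in (10, 20, 30, 40, 50):
    print(u, drag_coefficient(u))  # grows up to ~30 m/s, then stays flat
```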
7.2. Wave Model

A third-generation wave model developed at the Japan Meteorological Agency/Meteorological Research Institute (MRI) is used as a component of the NHM-wave-ocean coupled model. The third-generation MRI ocean wave model (MRI-III) calculates the wave spectrum F(f, θw) as a function of space and time from an energy balance equation,

$$ \frac{\partial F(f,\theta_w)}{\partial t} + \mathbf{C}_g \cdot \nabla F(f,\theta_w) = S_{in} + S_{nl} + S_{ds}, \qquad (54) $$

where f is the wave frequency, θw is the wave direction, Sin is the spectral energy input by the wind, Snl is the nonlinear transfer of spectral energy due to wave-wave interactions, and Sds is the dissipation due to wave breaking and whitecap formation. The energy input Sin is expressed as

$$ S_{in} = A + B\,F(f,\theta_w), \qquad (55) $$
where A indicates the linear wave growth [104] and BF(f,θw) is determined from an empirical formula [105, 106]. MRI-III adopts the extended discrete-interaction-approximation scheme, a modified version of the original scheme [108], in calculating Snl [107]. The energy-dissipation term Sds is formulated based on laboratory experiments and dimensional analysis [109]. The MRI-III specifications incorporated into the NHM-wave-ocean coupled model are as follows. The wave spectrum consists of 900 components, 25 in frequency and 36 in direction. The frequency axis is divided logarithmically from 0.0375Hz to 0.3000Hz. The roughness length over the ocean is calculated by assuming that it depends on the wave steepness [110].
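A toy single-column illustration of Eqs. (54)-(55) follows: one forward-Euler step of the spectral energy balance with the advection term dropped and placeholder source terms. The spectral discretization (25 log-spaced frequencies, 36 directions) follows the text; the source-term formulas themselves are stand-ins, not those of MRI-III.

```python
import numpy as np

# Spectral grid as specified in the text: 25 log-spaced frequencies, 36 directions.
freqs = np.logspace(np.log10(0.0375), np.log10(0.3000), 25)  # Hz
ndirs = 36
F = np.full((25, ndirs), 1e-3)     # wave spectrum F(f, theta_w), placeholder values

def step(F, u10, dt=1200.0):
    """One explicit step of dF/dt = S_in + S_nl + S_ds (advection omitted).
    S_in follows Eq. (55), S_in = A + B*F; A, B, and S_ds are toy stand-ins."""
    A = 1e-8 * u10**2              # linear growth term (illustrative magnitude)
    B = 1e-5 * u10                 # exponential growth rate (illustrative)
    S_in = A + B * F
    S_ds = -1e-4 * F**2            # toy dissipation, stronger for energetic seas
    S_nl = 0.0                     # nonlinear transfer omitted in this sketch
    return F + dt * (S_in + S_nl + S_ds)

F = step(F, u10=30.0)              # one 20-minute step under a 30 m/s wind
```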
Figure 43. Schematic diagram of NHM-wave-ocean coupled model.
Figure 43 presents a schematic diagram of the exchange processes among NHM, MRI-III, and the mixed-layer ocean model. The exchange procedures between NHM and the mixed-layer ocean model are the same as those of the NCM described in section 5.2. The exchange procedures between NHM and MRI-III are as follows: the wind speed calculated by NHM is provided to MRI-III; the wave height and roughness length calculated by MRI-III are provided to NHM. The exchange procedures between the mixed-layer ocean model and MRI-III are as follows: the current velocity in the mixed layer calculated by the mixed-layer ocean model is provided to MRI-III; the wave-induced stress calculated by MRI-III is provided to NHM. The time interval for exchanging fields between MRI-III and the components of the NCM is 20 minutes. The horizontal resolution of MRI-III is the same as that of NHM and NCM.
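The exchange cycle just described can be sketched as a simple coupling loop. Everything here is schematic: the component classes and method names are invented stand-ins, not the actual interfaces of NHM, MRI-III, or the mixed-layer model.

```python
# Schematic of the 20-minute exchange cycle described above. All class and
# method names are illustrative stand-ins, not real model APIs.
EXCHANGE_S = 20 * 60   # fields are exchanged every 20 minutes

def run_coupled(atmos, wave, ocean, hours=39):
    for _ in range(int(hours * 3600 / EXCHANGE_S)):
        wave.set_wind(atmos.wind_10m())                # NHM -> MRI-III
        wave.set_current(ocean.mixed_layer_current())  # ocean -> MRI-III
        atmos.set_surface(wave.wave_height(),          # MRI-III -> NHM
                          wave.roughness_length(),
                          wave.wave_induced_stress())
        atmos.set_sst(ocean.sst())                     # NHM <-> ocean exchanges
        ocean.set_fluxes(atmos.surface_fluxes())       # follow the NCM (section 5.2)
        for model in (atmos, wave, ocean):
            model.advance(EXCHANGE_S)                  # integrate one interval

class _Stub:
    """Placeholder component whose methods all return 0.0, so the loop runs."""
    def __getattr__(self, name):
        return lambda *args: 0.0

run_coupled(_Stub(), _Stub(), _Stub(), hours=1)
```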
7.3. Impact of the Sea State
To investigate the impact of the sea state on TC intensity prediction, numerical predictions are performed using NHM, NCM, and the NHM-wave-ocean coupled model. The experiment design is the same as that described in section 6.2.3. In addition to the numerical predictions by NHM (Atmos) and NCM (Ocean), numerical predictions are performed by the NHM-wave-ocean coupled model (Wave) under two oceanic conditions: oceanic preexisting conditions on 12 July 2005 and those on 12 July 1999 (Table 3). Figure 44 plots the best-track positions of Hai-Tang and Hai-Tang’s predicted positions in Wave under the two oceanic preconditions. Even though the wave model is incorporated into the NCM, the ocean wave cannot improve the northwestward bias of the predicted tracks presented in Figure 42. Thus, the impact of the sea state on TC track prediction is negligibly small.
Figure 44. Best track and predicted tracks of Typhoon Hai-Tang in 2005 produced by the NHM-wave-ocean coupled model.
Table 3. Abbreviations of Numerical Predictions, Years of Initial SST, Horizontal Resolution and Model Combination. KF indicates the inclusion of cumulus parameterization; SG indicates the inclusion of the diurnally varying SST scheme

Prediction       Year   Horizontal Resolution   Model Combination
Atmos_05         2005   6km                     NHM(KF)
Ocean_05         2005   6km                     NHM(KF)+OCEAN(SG)
Wave_05          2005   6km                     NHM(KF)+OCEAN(SG)+WAVE
Atmos_99         1999   6km                     NHM(KF)
Ocean_99         1999   6km                     NHM(KF)+OCEAN(SG)
Wave_99          1999   6km                     NHM(KF)+OCEAN(SG)+WAVE
Atmos3km_2005    2005   3km                     NHM
Ocean3km_2005    2005   3km                     NHM+OCEAN(SG)
Wave3km_2005     2005   3km                     NHM+OCEAN(SG)+WAVE
Figure 45 depicts the time series of predicted central pressures in Atmos, Ocean, and Wave under the two oceanic preexisting conditions. The ocean wave plays a role in promoting TC intensification when the oceanic preexisting conditions in 2005 are used for the TC simulations, but plays a role in suppressing TC intensification when the oceanic preexisting conditions in 1999 are used. The numerical prediction result reveals that the impact of the sea state on TC intensity prediction cannot be straightforwardly understood. The impact of the ocean wave on TC intensity prediction is almost the same as the impact of a difference in oceanic preexisting conditions and is larger than the impact of diurnally varying SST.
Figure 45. Time series of central pressures in Atmos05, Atmos99, Ocean05, Ocean99, Wave05, Wave99, and the best track.
Figure 46. Same as Figure 45 except for Atmos3km_2005, Ocean3km_2005, Wave3km_2005, and the best track.
Figure 46 is the same as Figure 45 except that it presents the results for NHM, NCM, and the NHM-wave-ocean coupled model with a horizontal grid spacing of 3km and without cumulus parameterization. As described in the introduction, a horizontal grid spacing of 1 to 2km is desirable for resolving the inner core of a TC [1]. Finer horizontal resolution is required for realistically simulating the inner core of a TC and its intensity. However, the central pressures predicted by NHM, NCM, and the NHM-wave-ocean coupled model with a horizontal grid spacing of 3km and without cumulus parameterization are higher (i.e., the simulated storms are weaker) than those predicted with a horizontal grid spacing of 6km and with cumulus parameterization [61] when the oceanic preexisting conditions in 2005 are used. This reveals that a high-resolution model cannot always reproduce TC intensity realistically without cumulus parameterization. In other words, cumulus convection can be linked with the upper-ocean thermal energy through the intensification of Hai-Tang.
8. Conclusion

This chapter addresses tropical cyclone (TC)-ocean interaction, focusing on the physical, chemical, and biological responses to a TC and on the impact of oceanic short-term variations and environments on TC intensity and its structural change on a weather-forecasting time scale, from a numerical point of view. The main conclusions are as follows. The dynamics and thermodynamics of the oceanic response to a TC depend on the phases of the TC, defined by the trend of central pressure and the TC's translation speed. The physical oceanic response to a TC also affects the chemical and biological responses to a TC. A sudden increase in the air-sea fluxes of CO2 is independent of the location relative to a TC track, while a patch-like pattern of high chlorophyll-a concentration is induced by Ekman pumping along a TC track. Sea-surface cooling becomes remarkable when cool water
transported from the oceanic interior by Ekman pumping is efficiently entrained by shear instability. At the same time, water with high chlorophyll-a concentration around the seasonal thermocline is mechanically mixed within the mixed layer. It should be noted that this process does not include phytoplankton growth stimulated by nutrient enrichment of the euphotic layer, which results in the increase in chlorophyll-a concentration three to seven days after the passage of TCs. SSC can suppress the intensification of a TC during the intensification phase. The intensification phase is characterized by the production of mesovortices within the cyclonic circulation of a TC-scale vortex. High initial sea-surface temperature (SST) accelerates the intensification process. In addition, high initial SST increases the radius of maximum wind speed during the mature phase. Oceanic preexisting conditions are essential for TC intensity prediction during the intensification phase of Typhoon Hai-Tang in 2005, but not during the mature phase of Typhoon Namtheun in 2004. The impact of SSC on TC intensity is the most remarkable, and the impacts of differences in oceanic preexisting conditions and sea state follow the impact of SSC. The impact of diurnally varying SST on TC intensity predictions is the smallest. SSC leads to negative feedback on TC intensification, while the impact of the sea state cannot be straightforwardly understood. This chapter describes the role of ocean thermal energy in TC intensification, the structural change of a TC, and the variations of ocean thermal energy caused by a TC. The most important processes in TC-ocean interaction are controlled by the multi-scale eddies seen in the atmosphere and the ocean. In order to utilize ocean thermal energy more efficiently, we should understand the process of interaction between eddies in the atmosphere and the ocean.
References

[1] Chen, SS; Price, JF; Zhao, W; Donelan, MA; Walsh, EJ. Bull. Am. Meteorol. Soc., 2007, 88, 311-317.
[2] Bender, MA; Ginis, I. Mon. Wea. Rev., 2000, 127, 917-946.
[3] Wada, A. Pap. Meteorol. Geophys., 2007, 58, 103-126.
[4] Price, JF. J. Phys. Oceanogr., 1981, 11, 153-175.
[5] Wada, A. Pap. Meteorol. Geophys., 2002, 52, 31-66.
[6] Shay, LK; Goni, GJ; Black, PG. Mon. Wea. Rev., 2000, 128, 1366-1383.
[7] Goni, GJ; Trinanes, JA. EOS Trans. AGU, 2003, 84, 573, 577-578.
[8] Bao, JW; Wilczak, JM; Choi, JK; Kantha, LH. Mon. Wea. Rev., 2000, 128, 2190-2210.
[9] Hong, W; Chang, SW; Raman, S; Shay, LK; Hodur, R. Mon. Wea. Rev., 2000, 128, 1347-1365.
[10] Lin, II; Wu, CC; Emanuel, KA; Lee, IH; Wu, CR; Pun, IF. Mon. Wea. Rev., 2005, 133, 2635-2649.
[11] Wada, A; Usui, N. J. Oceanogr., 2007, 63, 427-447.
[12] Leipper, DF. J. Atmos. Sci., 1967, 24, 182-196.
[13] Sakaida, F; Kawamura, H; Toba, Y. J. Geophys. Res., 1998, 103, 1053-1065.
[14] Gierach, MM; Subrahmanyam, B. J. Geophys. Res., 2008, 113, C04029.
[15] Babin, SM; Carton, JA; Dickey, TD; Wiggert, JD. J. Geophys. Res., 2004, 109, C03043.
[16] Price, JF; Morzel, J; Niiler, PP. J. Geophys. Res., 2008, 113, C07010.
[17] Mao, Q; Chang, SW; Pfeffer, RL. Mon. Wea. Rev., 2000, 128, 4058-4070.
[18] Chan, JCL; Duan, Y; Shay, LK. J. Atmos. Sci., 2001, 58, 154-172.
[19] Zhu, H; Ulrich, W; Smith, RK. J. Atmos. Sci., 2004, 61, 1245-1258.
[20] Wu, L; Wang, B; Braun, SA. Mon. Wea. Rev., 2005, 133, 3299-3314.
[21] Wu, CC; Lee, CY; Lin, II. J. Atmos. Sci., 2007, 64, 3562-3578.
[22] Emanuel, KA. J. Atmos. Sci., 1986, 43, 1763-1775.
[23] Emanuel, KA. J. Atmos. Sci., 1995, 52, 3969-3976.
[24] Bister, M; Emanuel, KA. Meteorol. Atmos. Phys., 1998, 65, 233-240.
[25] Bister, M; Emanuel, KA. J. Geophys. Res., 2002, 107(D24), 4801.
[26] Bender, MA; Ginis, I; Kurihara, Y. J. Geophys. Res., 1993, 98, 23245-23263.
[27] Wada, A. J. Oceanogr., 2005, 61, 41-57.
[28] Wada, A; Niino, H; Nakano, H. J. Oceanogr., 2009, 65, 373-396.
[29] Kraus, EB; Turner, JS. Tellus, 1967, 19, 98-105.
[30] Gill, AE. Atmosphere-Ocean Dynamics. International Geophysics Series, 1982, Vol. 30, Academic Press, San Diego, CA, 662pp.
[31] Ginis, I; Richardson, RA; Rothstein, LM. J. Phys. Oceanogr., 1998, 126, 1054-1079.
[32] Deardorff, JW. J. Phys. Oceanogr., 1983, 13, 988-1002.
[33] Soloviev, A; Lukas, R. The near-surface layer of the ocean: Structure, dynamics and application. Atmospheric and Oceanographic Science Library, 2006, Vol. 31, Springer, Dordrecht, 572pp.
[34] Kawai, Y; Wada, A. J. Oceanogr., 2007, 63, 721-744.
[35] Schiller, A; Godfrey, JS. J. Geophys. Res., 2005, 110, C11014.
[36] Ohlmann, JC; Siegel, DA. J. Phys. Oceanogr., 2000, 30, 1849-1865.
[37] Ishikawa, I; Tsujino, H; Hirabara, M; Nakano, H; Yasuda, T; Ishizaki, H. Meteorological Research Institute Community Ocean Model (MRI.COM) manual. Technical Reports of the Meteorological Research Institute, 2005, 47, 189pp. (in Japanese)
[38] Noh, Y; Kim, HJ. J. Geophys. Res., 1999, 104, 15621-15634.
[39] Smith, RD; Sandwell, DT. Science, 1997, 277, 1956-1962.
[40] Griffies, SM; Hallberg, RW. Mon. Wea. Rev., 2000, 128, 2935-2946.
[41] Redi, MH. J. Phys. Oceanogr., 1982, 12, 1154-1158.
[42] Gent, PR; McWilliams, JC. J. Phys. Oceanogr., 1990, 20, 150-155.
[43] Tsujino, H; Hasumi, H; Suginohara, N. J. Phys. Oceanogr., 2000, 30, 2853-2865.
[44] Paulson, CA; Simpson, JJ. J. Phys. Oceanogr., 1977, 7, 952-956.
[45] Usui, N; Ishizaki, S; Fujii, Y; Tsujino, H; Yasuda, T; Kamachi, M. Adv. Space Res., 2006, 37, 806-822.
[46] Kanamitsu, M; Ebisuzaki, W; Woolen, J; Yang, SK; Hnilo, JJ; Fiorino, M; Potter, GL. Bull. Am. Meteorol. Soc., 2002, 83, 1631-1643.
[47] Staniforth, A. In Numerical Methods in Atmospheric and Oceanic Modelling. The André J. Robert Memorial Volume; Lin, CA; Ed; Canadian Meteorological and Oceanographic Society / NRC Research Press, Montreal, 1997, 25-54.
[48] Japan Meteorological Agency. Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO numerical weather prediction progress report, 2002, 158.
[49] Japan Meteorological Agency. Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO numerical weather prediction progress report, 2007, 194.
[50] Mashiko, W; Muroi, C. CAS/JSC WGNE Res. Activ. Atmos. Oceanic Modell., 2003, 33, 0522-0523.
[51] Wada, A. J. Geophys. Res., 2009, 114, D18111.
[52] Ikawa, M; Saito, K. Description of a nonhydrostatic model developed at the Forecast Research Department of the MRI. Technical Reports of the Meteorological Research Institute, 1991, 28, 238.
[53] Saito, K; Kato, T; Eito, H; Muroi, C. Documentation of the Meteorological Research Institute / Numerical Prediction Division unified nonhydrostatic model. Technical Reports of the Meteorological Research Institute, 2001, 42, 133.
[54] Saito, K; Coauthors. Mon. Wea. Rev., 2006, 134, 1266-1298.
[55] Saito, K; Ishida, J; Aranami, K; Hara, T; Segawa, T; Narita, M; Honda, Y. J. Meteorol. Soc. Japan, 2007, 85B, 271-304.
[56] Lin, YH; Farley, RD; Orville, HD. J. Clim. Appl. Meteorol., 1983, 22, 1065-1092.
[57] Kondo, J. Boundary Layer Meteorol., 1975, 9, 91-112.
[58] Klemp, JB; Wilhelmson, R. J. Atmos. Sci., 1978, 35, 1070-1096.
[59] Deardorff, JW. Boundary Layer Meteorol., 1980, 18, 495-527.
[60] Sugi, M; Coauthors. Geophys. Mag., 1990, 43, 105-130.
[61] Kain, JS; Fritsch, JM. J. Atmos. Sci., 1990, 47, 2784-2802.
[62] Ginis, I. In Atmosphere-Ocean Interactions, Volume 1; Perrie, W; Ed; International Series on Advances in Fluid Mechanics; WIT Press, Southampton, UK, 2002, pp 83-114.
[63] Monaldo, FM; Sikora, TD; Babin, SM; Sterner, RE. Mon. Wea. Rev., 1997, 125, 2716-2721.
[64] Geisler, JE. Geophys. Fluid Dyn., 1970, 1, 249-272.
[65] Elsberry, RL; Fraim, T; Trapnell, R. J. Geophys. Res., 1976, 81, 1153-1162.
[66] Chang, SW; Anthes, RA. J. Phys. Oceanogr., 1978, 8, 468-480.
[67] Price, JF. J. Phys. Oceanogr., 1981, 11, 153-175.
[68] Ginis, I. In Global Perspectives on Tropical Cyclones; Elsberry, RL; Ed; 1995, 693, 198-260.
[69] Wright, R. Tellus, 1969, 21, 409-413.
[70] Ramage, CS. J. Appl. Meteor., 1974, 13, 739-751.
[71] Zedler, SE; Dickey, TD; Doney, SC; Price, JF; Yu, X; Mellor, GL. J. Geophys. Res., 2002, 107(C12), 3232.
[72] Hong, CH; Yoon, JH. J. Geophys. Res., 2003, 108, 3282.
[73] Jacob, SD; Shay, LK. J. Phys. Oceanogr., 2003, 33, 649-676.
[74] Jacob, SD; Koblinsky, CJ. Mon. Wea. Rev., 2007, 135, 2207-2225.
[75] Prasad, TG; Hogan, PJ. J. Geophys. Res., 2007, 112, C04013.
[76] D'Asaro, EA. J. Phys. Oceanogr., 2003, 33, 561-579.
[77] D'Asaro, EA; Sanford, TB; Niiler, PP; Terrill, EJ. Geophys. Res. Lett., 2007, 34, L15609.
[78] Sanford, TB; Price, JF; Girton, JB; Webb, DC. Geophys. Res. Lett., 2007, 34, L13604.
[79] Shay, LK; Mariano, AJ; Jacob, SD; Ryan, ED. J. Phys. Oceanogr., 1998, 28, 858-889.
[80] Bates, NR; Knap, AH; Michaels, AF. Nature, 1998, 395, 58-61.
[81] Perrie, W; Zhang, W; Ren, X; Long, Z. Geophys. Res. Lett., 2004, 31, L09300.
[82] Perrie, W; Zhang, W; Ren, X; Long, Z; Hare, J. In Atmosphere-Ocean Interactions, Volume 2; Perrie, W; Ed; International Series on Advances in Fluid Mechanics; WIT Press, Southampton, UK, 2006, 39, 143-151.
[83] Nemoto, K; Midorikawa, T; Wada, A; Ogawa, K; Takatani, S; Kimoto, H; Ishii, M; Inoue, HY. Deep Sea Res., Part II, 2009, 56, 542-553.
[84] Wanninkhof, R. J. Geophys. Res., 1992, 97, 7373-7382.
[85] Danabasoglu, G; Large, WG; Tribbia, JJ; Gent, PR; Briegleb, BP; McWilliams, JC. J. Climate, 2006, 19, 2347-2365.
[86] Kossin, JP; McNoldy, BD; Schubert, WH. Mon. Wea. Rev., 2002, 130, 3144-3149.
[87] Montgomery, MT; Vladimirov, VA; Denissenko, PV. J. Fluid Mech., 2002, 471, 1-32.
[88] Rozoff, CM; Schubert, WH; McNoldy, BD; Kossin, JP. J. Atmos. Sci., 2006, 63, 325-340.
[89] Wang, Y. J. Atmos. Sci., 2008, 65, 1158-1181.
[90] Guinn, TA; Schubert, WH. J. Atmos. Sci., 1993, 50, 3380-3403.
[91] Montgomery, MT; Kallenbach, RJ. Q. J. R. Meteorol. Soc., 1997, 123, 435-465.
[92] Hendricks, EA; Montgomery, MT; Davis, CA. J. Atmos. Sci., 2004, 61, 1209-1232.
[93] Baik, JJ; Paek, JS. J. Meteor. Soc. Japan, 1998, 76, 129-137.
[94] Grell, GA. Mon. Wea. Rev., 1993, 121, 764-787.
[95] Zheng, ZW; Ho, CR; Kuo, NJ. Geophys. Res. Lett., 2009, 35, L20603.
[96] Wada, A; Sato, K; Usui, N; Kawai, Y. Geophys. Res. Lett., 2009, 36, L09603.
[97] Jacob, SD; Koblinsky, CJ. Mon. Wea. Rev., 2007, 135, 2207-2225.
[98] Leipper, DF; Volgenau, D. J. Phys. Oceanogr., 1972, 2, 218-224.
[99] Powell, MD; Vickery, PJ; Reinhold, TA. Nature, 2003, 422, 279-283.
[100] Donelan, MA; Haus, BK; Reul, N; Plant, WJ; Stiassnie, M; Graber, HC; Brown, OB; Saltzman, ES. Geophys. Res. Lett., 2004, 31, L18306.
[101] Moon, IJ; Ginis, I; Hara, T; Tomas, B. Mon. Wea. Rev., 2007, 135, 2869-2878.
[102] Doyle, JD. Mon. Wea. Rev., 2002, 130, 3087-3099.
[103] Bao, JW; Michelson, SA; Wilczak, JM; Fairall, CW. In Atmosphere-Ocean Interactions; Perrie, W; Ed; 2002, 1, 115-153.
[104] Cavaleri, L; Rizzoli, PM. J. Geophys. Res., 1981, 86, 10961-10973.
[105] Plant, WJ. J. Geophys. Res., 1982, 87, 1961-1967.
[106] Mitsuyasu, H; Honda, T. J. Fluid Mech., 1982, 123, 425-442.
[107] Ueno, K; Kohno, N. In 8th International Workshop on Wave Hindcasting and Forecasting, 2004, G2, 1-7.
[108] Hasselmann, S; Hasselmann, K; Allender, JH; Barnett, TP. J. Phys. Oceanogr., 1985, 15, 1378-1391.
[109] Ueno, K. Sokkou-Jihou, 1998, 65, S181-S187. (in Japanese)
[110] Taylor, PK; Yelland, MJ. J. Phys. Oceanogr., 2001, 31, 572-590.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 69-97
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 2
THE FUTURE OF ENERGY: THE GLOBAL CHALLENGE

Mustafa Omer

17 Juniper Court, Forest Road West, Nottingham NG7 4EU, UK
Abstract

There are technologies under development today for carbon capture and storage, in order to achieve carbon dioxide (CO2)-neutral emissions from fossil fuels, mainly coal. Such technologies may be realised within ten years from now, but they will most probably suit very large combined heat and power (CHP) plants, since large investments are expected and plant efficiencies are likely to drop by approximately 10%. Global warming will eventually lead to substantial changes in the world's climate, which will, in turn, have a major impact on human life and the environment. Cogeneration plants fuelled using waste gases provide an economic and environmentally friendly way of helping to satisfy the heat and power needs of industry or a community. This study has explored the use of waste fuels, explaining some of the main considerations necessary to ensure the cogeneration plant provides the required heat and power in a reliable and efficient manner. Renewable energy technologies (RETs) are particularly suited to the provision of rural and urban power supplies. A major advantage is that equipment such as flat-plate solar driers, wind machines, etc., can be constructed using local resources; a further advantage results from the feasibility of local maintenance and from the general encouragement that such local manufacture gives to the build-up of small-scale rural-based industry. This chapter gives some examples of small-scale energy converters; nevertheless, it should be noted that small conventional engines are currently the major source of power in rural areas and will continue to be so for a long time to come. There is a need for some further development to suit local conditions, to minimise spares holdings, and to maximise the interchangeability both of engine parts and of engine applications. Emphasis should be placed on full local manufacture. The adoption of green and/or sustainable approaches to the way in which society is run is seen as an important strategy in finding a solution to the energy problem. The key factors in reducing and controlling CO2, which is the major contributor to global warming, are the use of alternative approaches to energy generation and the exploration of how these alternatives are used today and may be used in the future as green energy sources. Even with modest assumptions about the availability of land, comprehensive fuel-wood farming programmes offer significant energy, economic and environmental benefits. These benefits would be dispersed in rural areas, where they are greatly needed and can serve as linkages for further rural economic
development. The nations as a whole would benefit from savings in foreign exchange, improved energy security, and socio-economic improvements. This chapter presents a comprehensive review of renewable energy sources, the environment, and sustainable development. This includes all the renewable energy technologies, materials and their development, energy efficiency systems, energy conservation scenarios, energy savings, and other mitigation measures necessary to reduce climate change.
Keywords: Renewable energy sources, technologies, sustainable development, environment.
1. Introduction

In order to develop an electrification policy for any given country, it is first necessary to take a long-term view of the electrification target that it is desired to achieve, and then to put a plan in place. In order not to arrive at an arbitrary separation of the two methods of electrification (decentralised and centralised), it is essential to adopt a global approach to such planning, with the following objectives:

• To define an electrification target for the studied region; this consists in choosing between centralised electrification solutions (grid) and decentralised ones (collective or individual).
• To propose a time schedule for the electrification of villages in the region, relying on criteria that take into account socio-economic, environmental, budgetary, and political aspects.
The field of renewable energy for village power has benefited from the great progress that has been made in the technical integration and reliability of system components. Unfortunately, the world is littered with systems that have failed because no one had the responsibility, direct incentives, and ability to perform maintenance and repairs. The present-day energy crisis has therefore resulted in the search for alternative energy resources in order to cope with the drastically changing energy picture of the world. Due to increasing fossil fuel prices, research in the utilisation of RETs has picked up considerable momentum in the world. The direct and indirect conversion of RETs for useful and efficient application, and the design of solar thermal devices, require an adequate knowledge of the solar radiation available at a particular location. Renewable energy technologies (RETs) and their role in serving society are well known. A great deal of research has been conducted around the globe to make the utilisation of RETs easy and simple through development and demonstration activities. The use of renewable energy resources could play an important role in this context, especially with regard to responsible and sustainable development. It represents an excellent opportunity to offer a higher standard of living to local people and will save local and regional resources. Implementation of greenhouses offers a chance for maintenance and repair services. It is expected that the pace of implementation will increase and the quality of work improve, in addition to building the capacity of the private and district staff in contracting procedures. Financial accountability is important and should be made transparent. There is strong scientific evidence that the average temperature of the earth's surface is rising. This is a result of the increased concentration of carbon dioxide and other greenhouse gases in the atmosphere released by the burning of fossil fuels. This global warming
will eventually lead to substantial changes in the world's climate, which will, in turn, have a major impact on human life and the built environment. Therefore, an effort has to be made to reduce fossil energy use and to promote green energies, particularly in the building sector. Energy use reductions can be achieved by minimising the energy demand, by rational energy use, by recovering heat, and by using more green energies. This study was a step towards achieving this goal. The adoption of green or sustainable approaches to the way in which society is run is seen as an important strategy in finding a solution to the energy problem. The key factors in reducing and controlling CO2, which is the major contributor to global warming, are the use of alternative approaches to energy generation and the exploration of how these alternatives are used today and may be used in the future as green energy sources. Even with modest assumptions about the availability of land, comprehensive fuel-wood farming programmes offer significant energy, economic and environmental benefits. These benefits would be dispersed in rural areas where they are greatly needed and can serve as linkages for further rural economic development. The nations as a whole would benefit from savings in foreign exchange, improved energy security, and socio-economic improvements. With a nine-fold increase in forest-plantation cover, the nation's resource base would be greatly improved. The international community would benefit from pollution reduction, climate mitigation, and the increased trading opportunities that arise from new income sources. As environmental concerns grow and fossil fuel sources diminish, more and more research is being focused on cleaner and renewable systems. Wind, sun and biomass are now well established as sustainable energy sources, which are friendly to the environment, causing little or no pollution as compared to fossil fuels. Energy efficiency brings health, productivity, safety, comfort and savings to homeowners, as well as local and global environmental benefits.
2. Renewable Energy

Eventually, renewable energies will dominate the world's energy supply system. There is no real alternative: humankind cannot indefinitely continue to base its life on the consumption of finite energy resources. Today, the world's energy supply is largely based on fossil fuels and nuclear power. These sources of energy will not last forever and have proven to be contributors to our environmental problems. The environmental impacts of energy use are not new, but they are increasingly well known; they range from deforestation to local and global pollution. In less than three centuries since the industrial revolution, humankind has already burned roughly half of the fossil fuels that accumulated under the earth's surface over hundreds of millions of years. Nuclear power is also based on a limited resource (uranium), and the use of nuclear power creates such incalculable risk that nuclear power plants cannot be insured.

Renewable sources of energy are an essential part of an overall strategy of sustainable development. They help reduce dependence on energy imports, thereby ensuring a sustainable supply. Furthermore, renewable energy sources can help improve the competitiveness of industries over the long run and have a positive impact on regional development and employment. Renewable energy technologies are suitable for off-grid services, serving those in remote areas of the world without requiring expensive and complicated grid infrastructure.
In 2007, the United States set out to reduce its dependence on foreign oil through renewable energy resources, with the goal of cutting gasoline usage by a full 20% in ten years through alternative fuels. Extending hope and opportunity depends on a stable supply of energy that keeps America's economy running and America's environment clean. For too long the nation has been dependent on foreign oil, and this dependence leaves it more vulnerable to hostile regimes and to terrorists, who could cause huge disruptions of oil shipments, raise the price of oil and do great harm to the economy. It is in the vital national interest to diversify America's energy supply, and the way forward is through technology. The country must continue changing the way it generates electric power, through even greater use of clean coal technology, solar and wind energy, and clean and safe nuclear power. To reach this goal, it must increase the supply of alternative fuels by setting mandatory fuel standards that require 35 billion gallons of renewable and alternative fuels in 2017, nearly five times the current target.

This chapter on investing in renewable technologies expands further on this theme and offers an in-depth analysis of the renewable energies available today, from biofuels to geothermal. In addition, it explores the benefits of each energy source, the growth drivers, challenges and barriers, and the economics of each energy. A complete analysis of all the renewable energies in use today, along with their economic and environmental impact, is also provided.

Over the past several decades, there have been a number of different ways to define unconventional natural gas. Often, the distinction between conventional and unconventional gas resources has been made based on economics. Commonly, uneconomic or marginally economic resources such as tight (low-permeability) sandstones, shale gas, and coalbed methane (CBM) are considered unconventional. However, due to continued research and favourable gas prices, many previously uneconomic or marginally economic gas resources are now economically viable and may not be considered unconventional by some companies. Unconventional gas resources are geologically distinct in that conventional gas resources are buoyancy-driven deposits, occurring as discrete accumulations in structural or stratigraphic traps, whereas unconventional gas resources are generally not buoyancy-driven. They are regionally pervasive accumulations, most commonly independent of structural or stratigraphic traps. The unconventional natural gas category (CBM, gas shales, tight sands, and landfill gas) is expected to continue at double-digit growth levels in the near term. By 2008, demand for unconventional natural gas had increased by 10.7% from 2003, aided by prioritised research and development efforts. The disparity between projected increases for natural gas consumption in mature market economies and the much smaller increases expected for production in these markets points to an increasing world dependence on transitional and emerging market gas production. Natural gas from unconventional reservoirs is being targeted to contribute a greater share of the world's natural gas supplies in the next two decades. Independent producers are helping develop many of the new technologies and well-site strategies to ensure that as much unconventional gas as possible will be available when it is needed.
Extracting more gas from unconventional resources will require significant improvements in exploration and production technology. New drilling technologies contributing to the efficiency of unconventional gas reservoir development and redevelopment include horizontal drilling, improvements to bits and better drill pipe. Although unconventional gas resources are abundant, they are generally more costly to
produce. Their exploitation was boosted in the late 1980s and early 1990s with the successful implementation of tax incentives designed to encourage their development. Since then, technological development has contributed to continued production growth, even in the absence of tax incentives (which generally are unavailable for production from wells drilled after 1992). Indeed, increasing production from unconventional gas resources has actually offset a decline in conventional gas production in recent years.
2.1. Water Resources and Desalination

The development of the Middle East and North Africa (MENA) countries as a whole has been concentrated in agriculture, mainly irrigated agriculture, which created job opportunities through less expensive investments. This averted the potential catastrophes of poverty and hunger and fostered domestic peace. However, as a result of the sharp increase in population and agricultural development, as well as the establishment of many small, medium-sized and even heavy industries, the available water resources became insufficient to meet development aspirations, especially because the spectrum of water uses has widened and the intensity of water needs has increased. Population growth, higher standards of living, industrialisation, irrigation and other activities have accelerated the exhaustion of available resources. The present shortage in water resources and the expected sharpening of demand should give rise to water policies involving more efficient conservation systems rather than the traditional search for new resources. The challenge is to develop and introduce the necessary technologies for water and wastewater systems. A rethinking of the management of the water sector has become essential, and radical changes towards a balanced resources/demand equation have become inevitable for a continual yield of water resources and to guarantee equity in these resources for future generations. Desalination, the only solution once reform remedies in the water-use sector have been exhausted, is a costly process: prohibitive for irrigation purposes and most industries, and barely acceptable for municipal uses, except in coastal cities. Increasing water scarcity in the downstream areas of several river basins demands improved water management and conservation in upper reaches. Improved management is impossible without proper monitoring at various levels. It is well known that all existing sewage treatment plants are overloaded; hence, the treated effluents do not comply with international effluent quality guidelines. The main reasons behind this are:

• Weak management and absence of environmental awareness.
• Public-sector institutional problems.
• Failure in process design, construction and operation.
• Lack of skilled operating staff and insufficient monitoring programmes.
• Poor maintenance and weak financial resources.
• Low level of public involvement and lack of financial commitment.
In some countries, a wide range of economic incentives and other measures are already helping to protect the environment. These include: (1) Taxes and user charges that reflect the costs of using the environment, e.g., pollution taxes and waste disposal charges. (2) Subsidies, credits and grants that encourage environmental protection. (3) Deposit-refund systems that
prevent pollution or resource misuse and promote product reuse or recycling. (4) Financial enforcement incentives, e.g., fines for non-compliance with environmental regulations. (5) Tradable permits for activities that harm the environment.

The principles of managing water resources, including groundwater, are summarised as follows:

• All water in the water cycle, including groundwater, is treated as part of the common resource.
• Water use allocations are not permanent and are given for a reasonable period (maximum 40 years).
• Existing water users have to apply for registration of their water use within a set period.
• National government is the custodian of the nation's water resources, including groundwater.
• All water uses, excluding reserves for basic human use and for ecological health, are subjected to a system of allocation that promotes equitable and sustainable development.
• To promote efficient water use, the policy is to charge users for the full financial costs of providing access to water, including infrastructure development and catchment management activities.
• The riparian system of allocation, in which the right to use water is tied to ownership of land along rivers, is effectively abolished.
The demand management exercise needs to begin immediately. As demand for water increases and new sources of water supply become expensive to develop, there is a need to use water more than once in the water cycle. In this context, wastewater can be reused for irrigation, groundwater recharge, recreational and environmental uses, non-potable urban uses and potable reuse. Finally, an overall administrative coordination framework will be required to effect these policy changes. However, the great potential of these resources in terms of reducing poverty is not yet underscored in policy circles. It is an area that needs the urgent attention of policy planners, lest the resource be endangered by unsustainable extraction. Growth in population, higher living standards and the rapid increase in industrialisation are exerting unprecedented pressure on water resources. Recent developments in technology have made seawater desalination more affordable than ever, leading to rapid growth in the implementation of this technology in coastal regions. The renewed interest in desalination is spurring technological advances to improve existing processes and to develop new and more efficient ones. Seawater desalination still consumes large amounts of energy despite significant advances in process design and materials in recent years, but a surge in demand for freshwater in coastal regions is generating new approaches, technologies and optimised processes that reduce energy consumption. Climatic and environmental changes and rising demand have increased the competition over water resources and have made cooperation between countries that share a transboundary river an important issue in water resources management and hydro-politics.
2.2. Wind Energy

Since early recorded history, people have been harnessing the energy of the wind. Wind energy propelled boats along the Nile River as early as 5000 BC. By 200 BC, simple windmills in China were pumping water, while vertical-axis windmills with woven reed sails were grinding grain in Persia and the Middle East. New ways of using the energy of the wind eventually spread around the world. By the 11th century, people in the Middle East were using windmills extensively for food production; returning merchants and crusaders carried this idea back to Europe. The Dutch refined the windmill and adapted it for draining lakes and marshes in the Rhine River Delta. When settlers took this technology to the New World in the late 19th century, they began using windmills to pump water for farms and ranches and, later, to generate electricity for homes and industry.

Wind power is the conversion of wind energy into a useful form, such as electricity, using wind turbines. In windmills, wind energy is used directly to crush grain or to pump water. At the end of 2007, worldwide capacity of wind-powered generators was 94.1 gigawatts (GW). Although wind currently produces just over 1% of worldwide electricity use, it accounts for approximately 19% of electricity production in Denmark, 9% in Spain and Portugal, and 6% in Germany and the Republic of Ireland. Globally, wind power generation increased more than fivefold between 2000 and 2007. Wind is simply air in motion, caused by the uneven heating of the earth's surface by the sun: since the earth's surface is made of very different types of land and water, it absorbs the sun's heat at different rates. Today, wind energy is mainly used to generate electricity. Wind energy is the world's fastest-growing energy source and is a clean and renewable energy source that has been in use for centuries in Europe and, more recently, in the United States and other nations. Wind turbines, both large and small, produce electricity for utilities, homeowners and remote villages. Wind energy is a clean energy source, as electricity generated by wind turbines does not pollute the air: this means less smog, less acid rain and fewer greenhouse gas emissions. Every 10,000 megawatts (MW) of wind installed can reduce CO2 emissions by approximately 33 million metric tons (MMT) annually if it replaces coal-fired generating capacity, or 21 MMT if it replaces generation from the average fuel mix.

Many developing countries have little incentive to use wind energy technologies to reduce their emissions, despite the fact that the most rapid growth in CO2 emissions is in the developing world. Two related activities could give both developed and developing countries incentives to develop wind projects. The first is joint implementation, a programme under which firms from the developed countries can earn carbon offsets by building clean energy projects in the developing world. Developed nations should endorse and push for joint implementation to move from its current status to full-scale implementation. The second activity is the World Bank's Global Environmental Facility (GEF), which can cover the incremental cost of developing environmentally benign or beneficial projects in the developing world, such as building a wind project instead of an apparently cheaper coal project.
This incentive is particularly important for countries such as China and India, which have tremendous power needs and must build energy capacity quickly at the lowest possible cost. Without going into details, the materials can be ranked in terms of decreasing energy cost, e.g., titanium, aluminium, plastics (on average), iron and cement (Table 1).
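As a rough illustration of the CO2 arithmetic quoted above, the Python sketch below scales the per-10,000-MW offset figures from the text to an arbitrary installed capacity. The linear scaling and the example capacity are illustrative assumptions, not analysis from this chapter.

    # CO2-offset arithmetic for installed wind capacity, based on the
    # per-10,000-MW figures quoted in the text. Linear scaling is an
    # illustrative assumption.
    COAL_OFFSET_MMT_PER_10GW = 33.0  # MMT CO2/year if wind displaces coal
    MIX_OFFSET_MMT_PER_10GW = 21.0   # MMT CO2/year against the average fuel mix

    def annual_co2_offset_mmt(installed_mw, displaces_coal=True):
        """Scale the quoted per-10,000-MW offset to any installed capacity."""
        per_10gw = COAL_OFFSET_MMT_PER_10GW if displaces_coal else MIX_OFFSET_MMT_PER_10GW
        return installed_mw / 10000.0 * per_10gw

    # Worldwide capacity at the end of 2007 (94.1 GW, as quoted above):
    print(round(annual_co2_offset_mmt(94100)))         # ~311 MMT CO2/year vs coal
    print(round(annual_co2_offset_mmt(94100, False)))  # ~198 MMT CO2/year vs the mix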
2.3. Biofuels

Biofuel is any fuel that is derived from biomass: recently living organisms or their metabolic by-products, such as manure from cows. It is a renewable energy source, unlike other natural resources such as petroleum, coal and nuclear fuels. Ethanol is manufactured by the microbial conversion of biomass materials through fermentation, and contains 35% oxygen. The production process consists of the conversion of biomass to fermentable sugars, the fermentation of sugars to ethanol, and the separation and purification of the ethanol. Fermentation initially produces ethanol containing a substantial amount of water. Distillation removes the majority of the water to yield about 95% purity ethanol, the balance being water; this mixture is called hydrous ethanol. If the remaining water is removed in a further process, the ethanol is called anhydrous ethanol and is suitable for blending into gasoline. Ethanol is "denatured" prior to leaving the plant, to make it unfit for human consumption, by the addition of a small amount of products such as gasoline.

Biodiesel fuels are oxygenated organic compounds - methyl or ethyl esters - derived from a variety of renewable sources such as vegetable oil, animal fat and cooking oil. The oxygen contained in biodiesel makes it unstable and requires stabilisation to avoid storage problems. Rapeseed methyl ester (RME) diesel, derived from rapeseed oil, is the most common biodiesel fuel available in Europe. In the United States, biodiesel from soybean oil, called soy methyl ester diesel, is the most common. Collectively, these fuels are referred to as fatty acid methyl esters (FAME).

Biofuels have become a growth industry, with worldwide production more than doubling in the last five years. The rapid expansion of ethanol production in the United States and of biodiesel production (and, to a lesser extent, biogas) in Germany and other countries in Western Europe has created a biofuels frenzy that has affected many countries, including Canada. Many measures have been used to stimulate the production and consumption of biofuels, including preferential taxation, subsidies, import tariffs and consumption mandates. Recently, Canadian federal and provincial governments have announced consumption mandates and subsidies to assist the rapid expansion of biofuels production in Canada. Canada has considerable natural resources and is one of the world's largest producers and exporters of energy. In 2006, Canada produced 21.1 quadrillion British thermal units (Btu) of total energy, the fifth largest amount in the world. Since 1980, Canada's total energy production has increased by 86%, while its total energy consumption has increased by only 48%. Almost all of Canada's energy exports go to the United States, making it the largest foreign source of US energy imports: Canada is consistently among the top sources of US oil imports, and it is the largest source of US natural gas and electricity imports. Recognising the importance of the energy trade between the two countries, both participate in the North American Energy Working Group, which seeks to improve energy integration and cooperation among Canada, the USA and Mexico.

In the European Union (EU), transport is responsible for an estimated 21% of all greenhouse gas (GHG) emissions that are contributing to global warming, and this percentage is rising.
In order to meet sustainability goals, in particular the reduction of GHG emissions agreed under the Kyoto Protocol, it is therefore essential to find ways of reducing emissions from transport. In light of this objective, along with diversifying fuel supply sources and developing long-term replacements for fossil oil, the European Commission proposed targets for biofuels in transport fuel by 2020 among the member states. This binding target is part of a long-term energy
package, which includes an overall binding 20% target for renewable energy. Under this, each member state will have to establish National Action Plans for its specific objectives and sectoral targets. Biofuels have been produced on an industrial scale in Europe since the 1990s, but production accelerated significantly in the early 2000s, largely in response to rising petroleum prices and favourable legislation passed by the EU institutions and member states. Biofuels have been promoted as part of the EU strategy to encourage renewable energy, and their production and use have expanded rapidly. Although the EU measures have, most of the time, applied equally to biodiesel and ethanol, biodiesel production has developed at a faster rate: biodiesel accounts for 80% of European biofuels production and ethanol for the remaining 20%. The European Union is by far the biggest producer of biodiesel in the world; the reason for the big share of biodiesel is that the majority of cars in the EU are diesel cars and, as such, there is a diesel deficit. The most important feedstock for EU biodiesel is rapeseed. Despite producing a significant portion of global biodiesel and increasing production of biofuels for transport, the EU faces a number of significant challenges in the coming years. Most important is the limited availability of land to cultivate biodiesel input crops such as rapeseed, although Ukraine's EU accession could help alleviate this constraint. A further challenge is that, even with the use of the most advanced production technologies, biofuels produced in the EU are not cost competitive with fossil fuels at current oil price levels. New input crops and production methods could make biofuels more competitive. Boiler efficiencies with biofuels are generally high (around 90%). Biofuel is a green and renewable fuel.
Table 1. Energy costs of materials

Material                               Energy cost to manufacture/process
                                       (MJ/metric tonne)
Concrete                               600-800
Cut wood (plywood)                     ~500 (~400)
Glass                                  16,000
Steel                                  21,000
Steel from scrap                       11,000
Aluminium (recycled)                   164,000 (18,000)
Plastics, high-density polyethylene    81,000
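To show how the figures in Table 1 are typically used, the sketch below sums the embodied manufacturing energy of a hypothetical bill of materials; the product masses are invented for illustration, and only the energy costs come from the table.

    # Embodied-energy estimate using Table 1 values (MJ per metric tonne).
    # The bill of materials is a made-up example, not from the chapter.
    ENERGY_COST_MJ_PER_TONNE = {
        "concrete": 700,  # midpoint of the 600-800 range
        "glass": 16000,
        "steel": 21000,
        "steel_from_scrap": 11000,
        "aluminium": 164000,
        "aluminium_recycled": 18000,
    }

    bill_of_materials_tonnes = {"concrete": 2.0, "steel": 0.5, "glass": 0.1}

    total_mj = sum(ENERGY_COST_MJ_PER_TONNE[m] * tonnes
                   for m, tonnes in bill_of_materials_tonnes.items())
    print(total_mj, "MJ =", round(total_mj / 3600.0, 2), "MWh")  # 13500 MJ = 3.75 MWh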
2.4. Landfill

With rising concern about energy sources, landfill gas (LFG) has emerged as an easily available, economically competitive and proven energy resource. As of January 2005, there were 375 LFG energy (LFGE) projects in the United States, generating electricity or providing direct-use energy for boilers, furnaces and other applications. Approximately 100 direct-use LFGE projects in operation burned over 70 billion cubic feet (bcf) of LFG in 2004. According to the US Environmental Protection Agency (EPA) Landfill Methane Outreach Programme (LMOP), more than 600 further landfills could still be developed, offering a potential gas flow capacity of over 280 bcf per year. LFG is a by-product of the decay of organic matter in municipal solid waste (MSW) landfills. The gas typically contains approximately 50% methane and 50% carbon dioxide, with some
additional trace compounds. The heat value of LFG ranges from 400 to 600 British thermal units (Btu) per cubic foot, and it can burn in virtually any application with minor adjustments to air/fuel ratios. The use of LFG provides environmental and economic benefits, and users of LFG have achieved significant cost savings compared to traditional fuels, primarily because LFG costs are consistently lower than the cost of natural gas. Additionally, because LFG is composed of approximately 50% methane, a major GHG, reducing landfill methane emissions by utilising it as a fuel helps businesses, energy providers and communities protect the environment and build a more sustainable energy future.

This study on landfill gas treatment and utilisation examines the LFG industry and contains basic information about LFG: its composition, production, the conditions affecting its production, movement and transport, and the health hazards and safety issues related to LFG. It also contains an overview of LFG sampling, treatment procedures, control measures and regulatory requirements. It is a comprehensive information bank for decision makers in the energy industry and an information source for others interested in this rapidly growing industry. Landfilling is the least preferred tier of the hierarchy of waste management options: waste minimisation, reuse and recycling, incineration with energy recovery, and optimised final disposal.

Over the past few decades, the fields of science and engineering have been seeking to develop new and improved types of energy technologies that have the capability of improving life all over the world. In order to make the next leap forward from the current generation of technology, scientists and engineers have been developing a new field of science called nanotechnology. Nanotechnology refers broadly to a field of applied science and technology whose unifying theme is the control of matter at the molecular level, in scales smaller than one micrometre (normally 1 to 100 nanometres), and the fabrication of devices within that size range. For scale, a single virus particle is about 100 nanometres in width. With nanotechnology, a large set of materials and improved products rely on a change in physical properties when the feature sizes are shrunk. Nanoparticles, for example, take advantage of their dramatically increased surface-area-to-volume ratio, and their optical properties, e.g., fluorescence, become a function of the particle diameter. When brought into a bulk material, nanoparticles can strongly influence mechanical properties such as stiffness or elasticity. For example, traditional polymers can be reinforced by nanoparticles, resulting in novel materials, e.g., as lightweight replacements for metals. Therefore, an increasing societal benefit of such nanoparticles can be expected. Consumption of energy by the end user is the result of a complex chain of energy generation, transportation and, often, conversion (Table 2).
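Returning to the landfill gas figures above, the sketch below converts the 70 bcf of LFG burned in direct-use projects in 2004 into total heat content; the 500 Btu per cubic foot midpoint of the quoted 400-600 range is an illustrative assumption.

    # Heat content of the LFG burned in direct-use projects in 2004,
    # using the figures quoted in the text. The 500 Btu/ft3 midpoint of
    # the quoted 400-600 Btu/ft3 range is an illustrative assumption.
    LFG_BURNED_FT3 = 70e9           # 70 billion cubic feet (bcf)
    HEAT_VALUE_BTU_PER_FT3 = 500.0  # midpoint of the quoted range

    total_btu = LFG_BURNED_FT3 * HEAT_VALUE_BTU_PER_FT3
    total_pj = total_btu * 1055.06 / 1e15  # 1 Btu is about 1055.06 J

    print(total_btu / 1e12, "trillion Btu")  # 35.0 trillion Btu
    print(round(total_pj, 1), "PJ")          # ~36.9 PJ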
2.5. Solar Energy

Solar power is used synonymously with solar energy, or more specifically to refer to the conversion of sunlight into electricity. This can be done either through the photovoltaic effect or by heating a transfer fluid to produce steam to run a generator. Solar energy technologies harness the sun's energy for practical ends. These technologies date from the time of the early Greeks, Native Americans and Chinese, who warmed their buildings by orienting them toward the sun. Modern solar technologies provide heating, lighting, electricity and even flight.
Concentrated sunlight has been used to perform useful tasks since the time of ancient China. A legend claims Archimedes used polished shields to concentrate sunlight on the invading Roman fleet and repel it from Syracuse in 212 BC. Leonardo da Vinci conceived of using large-scale solar concentrators to weld copper in the 15th century. In 1866, Auguste Mouchout successfully powered a steam engine with sunlight, the first known example of a concentrating solar-powered mechanical device. Over the following 50 years, inventors such as John Ericsson and Frank Shuman developed solar-powered devices for irrigation, refrigeration and locomotion. The progeny of these early developments are the concentrating solar thermal power plants of today.

Concentrating solar thermal (CST) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam, which is then used to generate electricity. Moreover, the high temperatures produced by CST systems can be used to provide process heat and steam for a variety of secondary commercial applications (cogeneration). However, CST technologies require direct insolation to function and are of limited use in locations with significant cloud cover. The main methods for producing a concentrated beam are the solar trough, the solar power tower and the parabolic dish; the solar bowl is more rarely used. Each concentration method is capable of producing high temperatures and high efficiencies, but they vary in the way they track the sun and focus light. Now a new generation of modular technology based on advanced materials enables efficient conversion of solar energy and carries the seeds of a new industrial revolution.
Table 2. Energy density of various materials

Fuel                         By weight (kJ/g)    By volume (kJ/litre)
Coal                         25.0                34,000
Wood (varies with type)      6.0-17.0            1,800-3,200
Gasoline/petrol (average)    44.0                31,000
Diesel                       43.0                30,000
Natural gas                  50.0                32 (25,000 as liquid)
Methanol                     19.5                15,600
Hydrogen                     120.0               10 (10,000 as liquid)
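The weight and volume columns of Table 2 tell different stories. As one way of reading them, the sketch below computes how many litres of each fuel carry the energy of one litre of diesel, using the table's volumetric values and the quoted "as liquid" figures for the gaseous fuels.

    # Litres of each fuel needed to match the energy in one litre of diesel,
    # using the volumetric values from Table 2 (gaseous fuels as liquids).
    ENERGY_KJ_PER_LITRE = {
        "diesel": 30000,
        "gasoline": 31000,
        "methanol": 15600,
        "natural gas (liquid)": 25000,
        "hydrogen (liquid)": 10000,
    }

    diesel_kj = ENERGY_KJ_PER_LITRE["diesel"]
    for fuel, kj_per_litre in ENERGY_KJ_PER_LITRE.items():
        print(fuel, round(diesel_kj / kj_per_litre, 2))
    # Liquid hydrogen needs ~3 litres per litre of diesel, even though its
    # per-weight density (120 kJ/g) is by far the highest in the table.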
2.6. Fuel Cells

This brief review explains the fuel cell market, identifies the current and future state of the fuel cell industry, and details industry initiatives and potential, including micro fuel cell technology and its potential. Fuel cells provide direct current (DC) voltage that can be used to power motors, lights or electrical appliances. Unlike batteries, fuel cells can be refuelled while operating. They compete with other types of energy conversion devices, such as gas turbines in power plants, gasoline engines in vehicles, and batteries in laptop computers. Fuel cells have the potential to become the dominant technology for automotive engines, power stations, and power packs for portable electronics. The percentage breakdown of fuel cell units manufactured and sold by technology type has remained steady in recent years. Overall, the market continues to be dominated by the proton exchange membrane fuel cell (PEMFC), the most flexible and market-adaptable fuel cell technology. However, other types of fuel cells are slowly gaining acceptance, creating a more dynamic
and robust industry. At the larger end of the fuel cell scale, molten carbonate fuel cells (MCFC) are dominant, with Fuel Cell Energy selling the most MCFCs. Solid oxide fuel cells (SOFC) are still struggling to make the jump from the research lab to the market and to find practical applications. Phosphoric acid fuel cell (PAFC) unit numbers remained practically unchanged in 2005, and thus their cumulative market share went down, but this trend is expected to change within two years, when UTC releases a new enhanced PAFC with a lifespan of 80,000 operating hours, the highest in the market. A relatively new battleground is the residential or small stationary market. This is, in reality, two separate markets, and some companies are entering the fray with a focus on either backup and premium power or residential power, rather than trying to sell into both markets. The main technology is the proton exchange membrane, and a majority of the units sold through 2005 were PEMFCs. SOFC has a small but significant market share in this sector, and there has been talk of early commercialisation by several SOFC companies. Finally, the small portable and portable electronic markets are dominated almost entirely, and in equal shares, by PEMFC and direct methanol fuel cell (DMFC) technologies. Currently, DMFC has an edge, due to the market activities of one or two large companies. Several other technologies are also under investigation for use in small portable and portable electronic devices.

Emerging fuel cell applications in the areas of transportation, industry, the home and consumer products speak to the enormous potential of this technology. Another important application for renewable energy is in the area of space travel. Since fuel cells do not rely on combustion, and thus do not produce air pollutants such as NOx (nitrogen oxides), SO2 (sulphur dioxide) or particulates, fuel cell use can substantially reduce pollution caused by emissions as well as reduce oil dependency. Prices for operation will remain vulnerable to natural gas supplies, as most fuel cells currently employ natural gas, but this will change if and when a hydrogen economy is established. Cogeneration plants fuelled by waste gases provide an economic and environmentally friendly way of helping to satisfy the heat and power needs of industry or a community.
2.7. Hydrogen

Hydrogen is the simplest, lightest and most abundant element in the universe, making up about 90% of all matter. It consists of just one electron and one proton and is, therefore, the first element in the periodic table. A hydrogen economy is a hypothetical economy in which energy is stored and transported as hydrogen (H2). Various hydrogen economy scenarios can be envisaged, using hydrogen in a number of ways; a common feature of these scenarios is the use of hydrogen as an energy carrier for mobile applications (vehicles and aircraft). In the context of a hydrogen economy, hydrogen is not a primary energy source; rather, hydrogen acts as a medium for energy. Nevertheless, issues of energy sourcing, including fossil fuel use, global warming and sustainable energy generation, continue to fuel controversy over
the usefulness of a hydrogen economy. While these are all separate issues, the hydrogen economy affects them all. Proponents of a hydrogen economy suggest that hydrogen is a cleaner source of energy for end users, particularly in transportation applications, where hydrogen eliminates the release of pollutants (such as greenhouse gases) at the point of end use. These advantages may hold similarly for hydrogen produced with energy from fossil fuels, provided carbon capture or carbon sequestration methods are utilised at the site of energy or hydrogen production. Meanwhile, critics of a hydrogen economy argue that, for many planned applications of hydrogen, the direct use of energy in the form of electricity, chemical batteries and fuel cells, and the production of liquid synthetic fuels from CO2, might accomplish many of the same net goals, while requiring only a small fraction of the investment in new infrastructure.
2.8. Ethanol Production

In 2003, the European Commission issued directives that govern European biofuels policy through 2010, including a target of 5.75% biofuels consumption in the transportation sector by 2010. These include measures to increase ethanol demand and supply and to provide tax benefits and exemptions to facilitate growth. The principal goals propelling bioethanol in the European countries are improving energy security, boosting rural development, and reducing greenhouse gas emissions. Transport is responsible for approximately 21% of the EU's greenhouse gas emissions, and recent European Commission directives have made biofuels in transport a regional priority. Not all member states are equally committed to the objectives set by the European Commission, but all are trying, to some extent, to achieve the EU targets. Biodiesel accounts for 80% of European biofuels production and ethanol for the remaining 20%. Active market actors and lobbying groups have contributed immensely to the evolution of the market in recent years.

However, some issues are of concern for the overall growth of the ethanol industry in Europe. Most important among them has been the recent rise in the prices of food grains and the subsequent decline in their supply. Many have blamed biofuels, including ethanol, for this crisis. However, the supporters of ethanol and other biofuels suggest that the global food crises are a result of growing oil prices, along with increased food consumption in the developing world and declining yields of food crops. Another issue affecting the further expansion of ethanol in Europe has been the small number of cars in Europe that can run on ethanol. Since a majority of the cars in the EU are diesel cars and there has been a diesel deficit, the focus has been on biodiesel. However, bioethanol has the advantage over biodiesel that it can be produced from a much larger variety of feedstocks. Furthermore, the ethanol industry in the EU has also had problems competing with cheap imports of bioethanol, especially from Brazil. These cheap imports have made it very difficult for the local industry to grow strong and manage without subsidies. Most of the cheap imports came through a loophole in Sweden; this loophole was closed in January 2006, and there is now a large number of planned bioethanol production plants in the EU. A further challenge is that, even with the use of the most advanced production technologies, bioethanol produced in the EU is not cost competitive with fossil fuels. According to the most recent estimates, European ethanol would only
break even at an oil price of $115 per barrel. New input crops and production methods could make ethanol more competitive. Lignocellulosic processing and biomass-to-liquid technologies have been mentioned as potential lower-cost alternatives to current technologies. Countries including Germany and the UK are actively promoting research into second-generation biofuels.
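Because ethanol carries less energy per litre than gasoline, blend economics depend on volumetric energy density. The sketch below uses round textbook heating values (about 21 MJ/litre for ethanol and 32 MJ/litre for gasoline; neither figure is from this chapter) to estimate the energy content of common blends.

    # Volumetric energy density of ethanol-gasoline blends. The heating
    # values are round textbook approximations, not figures from this chapter.
    ETHANOL_MJ_PER_LITRE = 21.0
    GASOLINE_MJ_PER_LITRE = 32.0

    def blend_mj_per_litre(ethanol_fraction):
        """Energy density of a blend; e.g. ethanol_fraction=0.10 for E10."""
        return (ethanol_fraction * ETHANOL_MJ_PER_LITRE
                + (1.0 - ethanol_fraction) * GASOLINE_MJ_PER_LITRE)

    for name, fraction in [("E10", 0.10), ("E85", 0.85)]:
        energy = blend_mj_per_litre(fraction)
        print(name, round(energy, 1), "MJ/L,",
              round(100 * energy / GASOLINE_MJ_PER_LITRE), "% of gasoline")
    # E10 carries ~97% of gasoline's energy per litre; E85 only ~71%, which
    # is why E85 vehicles show noticeably higher volumetric fuel consumption.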
3. Nuclear
The nuclear power market potential suggests that nuclear power could help reduce dependence on fossil fuels and curb CO2 emissions in a cost-effective way, since its uranium fuel is abundant. However, governments must take a more active role in facilitating private investment, especially in liberalised electricity markets, where the trade-off between security and low price has been a disincentive to investment in new plant and grid infrastructure. Investment of $20.2 trillion will be required by 2030 under the International Energy Agency (IEA) alternative energy scenario, increasing nuclear capacity by 41% to 519 GWe and reducing energy demand by 10% and CO2 emissions by 16% compared with projections on the present basis. Of this amount, $11.3 trillion will go to electricity: $5.2 trillion for generation and the rest for transmission and distribution. The major issues affecting the nuclear power industry include:

• Technologies for new nuclear facilities.
• Nuclear fuel cycle and nuclear waste disposal.
• Nuclear regulation.
• Non-proliferation goals.
• Energy security.
• The global nuclear energy partnership.
• Nuclear weapons.
Today, the world produces as much electricity from nuclear energy as it did from all sources combined in 1960. Civil nuclear power can now boast more than 12,400 reactor-years of experience, and nuclear energy supplies 16% of global needs in 30 countries. Nuclear technology uses the energy released by splitting the atoms of certain elements. Its applications range from bomb production to power generation. It was first developed in the 1940s, and during World War II research focused on producing bombs by splitting atoms of uranium or plutonium. In the 1950s, attention turned to peaceful applications of nuclear fission, notably power generation. Nuclear power generation is an established part of the world's electricity mix, providing over 16% of the world's electricity (cf. coal 40%, oil 10%, natural gas 15%, and hydro and other 19%). It is particularly suitable for large-scale, base-load electricity demand. Although fewer nuclear power plants are being built now than during the 1970s and 1980s, those that are operating produce more electricity. In 2005, production was 2,626 billion kWh. The increase over the last five years (218 TWh) is equal to the output of 30 large new nuclear plants, yet between 1999 and 2005 there was a net increase of only two reactors (and 15 GWe); the rest of the increase is due to better performance from existing units. With the United Nations predicting the world's population to increase from 6.4 billion in 2010 to 8.1 billion by 2030, demand for energy will
inevitably increase substantially. Both population growth and increasing standards of living for many people in developing countries will create strong growth in energy demand, expected to be 1.6% per year, or 53% from 2010 to 2030. Nuclear power is a type of nuclear technology involving the controlled use of nuclear reactions, usually nuclear fission, to release energy for work including propulsion, heat, and the generation of electricity. Nuclear energy is produced by a controlled nuclear chain reaction that creates heat, which is used to boil water, produce steam, and drive a steam turbine. A nuclear reactor is a device in which nuclear chain reactions are initiated, controlled and sustained at a steady rate, as opposed to a nuclear bomb, in which the chain reaction occurs in a fraction of a second and is uncontrolled, causing an explosion. The most significant use of nuclear reactors is as an energy source for the generation of electrical power and for powering some ships. This is usually accomplished by methods that use heat from the nuclear reaction to power steam turbines. The United States produces the most nuclear energy, with nuclear power providing 20% of the electricity it consumes, while France produces the highest percentage of its electrical energy from nuclear reactors (80% as of 2006). In the European Union as a whole, nuclear energy provides 30% of the electricity. Nuclear energy policy differs between the European Union countries, and some, such as Austria and Ireland, have no active nuclear power stations. In comparison, France has a large number of these plants, with 16 multi-unit stations in current use. An analysis of the major nuclear power plants in the United States (75 plants) also takes in the overall nuclear power industry worldwide, together with the basics of nuclear power.

Conventional, centralised electricity networks are the norm in the developed world. However, the present energy infrastructure of the developed countries was mainly created during the monopolistic utility era of the past. Throughout the energy generation process there are impacts on the environment at local, national and international levels, from opencast mining and oil exploration to emissions of the potent greenhouse gas carbon dioxide in ever-increasing concentrations. Recently, the world's leading climate scientists reached an agreement that human activities, such as burning fossil fuels for energy and transport, are causing the world's temperature to rise. The Intergovernmental Panel on Climate Change has concluded that "the balance of evidence suggests a discernible human influence on global climate". Indeed, people are already waking up to the financial and social, as well as the environmental, risks of unsustainable energy generation methods, which represent the costs of the impacts of climate change, acid rain and oil spills.
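The claim above that a 218 TWh output increase equals the output of 30 large new plants implies a particular plant size and capacity factor. The sketch below makes that arithmetic explicit; the 1,000 MWe "large plant" size is an illustrative assumption, while the 218 TWh and 30-plant figures are from the text.

    # What does "218 TWh of extra output = 30 large new nuclear plants" imply?
    # The 1,000 MWe plant size is an illustrative assumption.
    HOURS_PER_YEAR = 8760
    output_increase_twh = 218.0
    n_plants = 30
    plant_mwe = 1000.0

    twh_per_plant = output_increase_twh / n_plants   # ~7.27 TWh per plant-year
    max_twh = plant_mwe * HOURS_PER_YEAR / 1e6       # ~8.76 TWh at 100% uptime
    print(round(twh_per_plant, 2), "TWh/plant; capacity factor",
          round(100 * twh_per_plant / max_twh), "%")
    # ~83% capacity factor, consistent with the text's point that better
    # performance from existing units, not new builds, drove the increase.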
4. Distributed Generation

The established system of electricity generation in the United States involves the use of large power plants transmitting power across distances (transmission) and then carrying it through local utility lines (distribution). The practice of installing and operating electric generating equipment at or near the site where the power is used is known as "distributed generation" (DG). Distributed generation provides electricity to customers on-site or supports a distribution network, connecting to the grid at distribution-level voltages. DG technologies include engines, small (and micro) turbines, fuel cells and photovoltaic systems.
Distributed generation may provide some or all of a customer's electricity needs. Customers can use DG to reduce demand charges imposed by their electric utility, to obtain premium power, or to reduce environmental emissions. DG can also be used by electric utilities to enhance their distribution systems, and many other applications for DG solutions exist. Commercial and industrial facilities can generate enough power to meet their needs using existing technologies, which also gives them the ability to have backup power during blackouts. Distributed generation systems can provide an organisation with the following benefits:
• Peak shaving (see the sketch following the next paragraph);
• On-site backup power during a voluntary interruption;
• Primary power, with backup power provided by another supplier;
• Combined heat and power for the organisation's own uses;
• Load following, for improved power quality or lower prices;
• Satisfying a preference for renewable energy.
In conjunction with combined heat and power (CHP) applications, DG can improve overall thermal efficiency. On a stand-alone basis, DG is often used as back-up power to enhance reliability or as a means of deferring investment in transmission and distribution networks, avoiding network charges, reducing line losses, deferring construction of large generation facilities, displacing expensive grid-supplied power, providing alternative sources of supply in markets, and providing environmental benefits. In recent years, DG has become an efficient and clean alternative to traditional distribution systems. Moreover, recent technologies are making it economically feasible. Substantial efforts are being made to develop environmentally sound and cost-competitive small-scale electric generation that can be installed at or near points of use in ways that enhance the reliability of local distribution systems or avoid more expensive system additions. Examples of these distributed resources include fuel cells, small gas turbines and photovoltaic arrays.
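As a concrete illustration of the peak-shaving benefit listed above, the sketch below estimates the monthly demand-charge saving when on-site generation trims a facility's peak. The tariff and load figures are hypothetical and serve only to show the calculation.

    # Peak shaving with on-site distributed generation: a hypothetical example.
    # The demand charge and load figures are illustrative assumptions.
    DEMAND_CHARGE_PER_KW = 15.0   # $ per kW of monthly peak demand
    peak_demand_kw = 2000.0       # facility peak without DG
    dg_output_kw = 500.0          # on-site generation run during peak hours

    shaved_peak_kw = peak_demand_kw - dg_output_kw
    monthly_saving = dg_output_kw * DEMAND_CHARGE_PER_KW
    print("peak:", shaved_peak_kw, "kW; saving: $", monthly_saving, "per month")
    # ~$7,500/month here, to be weighed against the DG unit's fuel and
    # maintenance costs during those peak hours.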
Figure 1. Global energy sources through the ages: shares (0-100%) of biomass and animal power versus coal, oil and natural gas, from pre-animal domestication and the Renaissance through the mid-1800s, early 1900s and the 1980s.
For a bird's-eye view of history and pre-history, human development can be charted not only by materials, as is often done (stone, bronze, iron, and plastics-silicon), but also by energy type, such as human, animal, water, wind, peat, coal, and oil and gas (Figure 1). Despite the surges in oil prices, both recently and in the mid-1970s and early 1980s of the 20th century, the fraction of natural resources in one dollar of product has steadily declined; that is, we learn to produce more and more from the same amount of natural resources. Changes in the prices of various natural resources actually work as a stimulus to develop alternatives.
5. Hydropower Potential

The growing worldwide demand for renewable energy projects is being driven by ever-increasing global energy consumption and the availability of carbon and renewable energy credits. Renewable energy is entering a new phase, with additional funding becoming available from governments, from socially responsible equity funds, and from public capital raisings. Hydropower is the capture of the energy of moving water for some useful purpose. Prior to the widespread availability of commercial electric power, hydropower was used for irrigation, the milling of grain, textile manufacture, and the operation of sawmills. There are various aspects of hydropower, including harnessing ocean energy, hydroelectric dams and micro-hydropower systems. This section explores the factors associated with realising the actual potential of hydropower energy. It also covers the technological details, along with the issues and challenges faced in the utilisation of hydropower, as well as major projects, power plants, players in the industry, the role of the global hydropower industry, and the various environmental benefits of using hydropower energy.

Hydropower produces essentially no carbon dioxide or other harmful emissions; in contrast to burning fossil fuels, it is not a significant contributor to global warming through the production of CO2. Hydroelectric power can be far less expensive than electricity generated from fossil fuel or nuclear energy, and areas with abundant hydroelectric power attract industry. However, environmental concerns about the effects of reservoirs may prohibit the development of economic hydropower sources in some areas. Hydropower currently accounts for approximately 20% of the world's electricity production, with about 650,000 MW installed and approximately 135,000 MW under construction or in the final planning stages. Notwithstanding this effort, there are large untapped resources on all continents, particularly in areas of the world that are likely to experience the greatest growth in power demand over the next century. It is estimated that only about a quarter of the economically exploitable water resources has been developed to date, leaving the potential for hydro to continue to play a large role in sustaining renewable global electricity production in the future. Apart from a few countries with an abundance of it, hydropower is normally applied to peak-load demand because it can be readily stopped and started. Nevertheless, hydroelectric power is probably not a major option for the future of energy production in the developed nations, because most major sites within these nations are either already being exploited or are unavailable for other reasons, such as environmental considerations.
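Taking the installed-capacity and market-share figures above at face value, the sketch below computes the worldwide capacity factor they imply; the world generation total is an assumed round figure for the mid-2000s, not a number from this chapter.

    # Implied capacity factor of world hydropower from the chapter's figures
    # (650,000 MW installed, ~20% of world electricity). The world generation
    # total is an assumed round number for the mid-2000s.
    WORLD_GENERATION_TWH = 17000.0
    installed_gw = 650.0
    hydro_share = 0.20

    hydro_twh = WORLD_GENERATION_TWH * hydro_share  # ~3,400 TWh/year
    max_twh = installed_gw * 8760 / 1000.0          # ~5,694 TWh at 100% uptime
    print("implied capacity factor:", round(100 * hydro_twh / max_twh), "%")
    # ~60% under these assumptions: hydro plant runs far closer to continuous
    # duty than wind or solar, though actual fleet figures vary by year.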
Large-scale, conventional power plant such as hydropower has an important part to play in development. It does not, however, provide a complete solution. There is an important complementary role for the greater use of small-scale, rural-based power plant. Such plant can be used to assist development, since it can be made locally using local resources, enabling a rapid build-up in total equipment without a corresponding and unacceptably large demand on central funds. Renewable resources are particularly suitable for providing the energy for such equipment, and their use is also compatible with the long-term aims.
6. Biomass

Future waste management programmes must be put into practice in conjunction with sound policies that restrict the use of fossil fuels and natural resources and contribute to the reduction of emissions into the environment. Such a strategy should rest on a sound scientific basis, without ideology, politics or financial interests. It should be implemented on a worldwide basis and not limited to industrialised countries. To achieve this goal, existing waste management options must be evaluated for implementation, new strategies must be formulated, and new, innovative solutions have to be found.

The non-technical issues, which have recently gained attention, include: (1) Environmental and ecological factors, e.g., carbon sequestration, reforestation and revegetation. (2) Renewables as a CO2-neutral replacement for fossil fuels. (3) Greater recognition of the importance of renewable energy, particularly modern biomass energy carriers, at the policy and planning levels. (4) Greater recognition of the difficulties of gathering good and reliable biomass energy data, and efforts to improve it. (5) Studies on the detrimental health effects of biomass energy, particularly on traditional energy users. This section presents a brief review of biomass
energy sources, the environment and sustainable development. This includes the biomass energy technologies, energy efficiency systems, energy conservation scenarios, energy savings and other mitigation measures necessary to reduce climate change. Bioenergy is energy from the sun stored in materials of biological origin. This includes plant matter and animal waste, known as biomass. Plants store solar energy through photosynthesis in cellulose and lignin, whereas animals store energy as fats. When burned, these sugars break down and release energy exothermically, releasing carbon dioxide, heat and steam. The by-products of this reaction can be captured and manipulated to create power, commonly called bioenergy. Biomass is considered renewable because the carbon is taken out of the atmosphere and replenished more quickly than the millions of years required for fossil fuels to form. The use of biofuels to replace fossil fuels contributes to a reduction in the overall release of carbon dioxide into the atmosphere and hence helps to tackle global warming. The range of waste treatment technologies that are tailored to produce bioenergy is growing. There are a number of key areas of bioenergy from wastes, including (but not limited to) biogas, biofuels and bioheat. When considering using bioenergy, it is important to take into account the overall emission of carbon in the process of electricity production.
Table 3. Sources of energy

Energy source    Energy carrier       Energy end-use
Vegetation       Fuel-wood            Cooking; water heating; building materials;
                                      animal fodder preparation
Oil              Kerosene             Lighting; ignition fires
Dry cells        Dry cell batteries   Lighting; small appliances
Muscle power     Animal power         Transport; land preparation for farming;
                                      food preparation (threshing)
Muscle power     Human power          Transport; land preparation for farming;
                                      food preparation (threshing)
The biomass energy resources are particularly suited to the provision of rural power supplies, and a major advantage is that equipment such as flat-plate solar driers, wind machines, etc., can be constructed using local resources and without the high capital cost of more conventional equipment. A further advantage results from the feasibility of local maintenance and the general encouragement such local manufacture gives to the build-up of small-scale rural industry. Table 3 lists the energy sources available. Considerations when selecting a power plant include the following:

• Power level: whether continuous or discontinuous.
• Cost: initial cost and total running cost, including fuel, maintenance and capital amortised over the plant's life.
• Complexity of operation.
• Maintenance and availability of spares.
• Life.
• Suitability for local manufacture.
In addition to the drain on resources, such an increase in consumption has serious consequences, bringing with it increased hazards of pollution and the safety problems associated with a large nuclear fission programme. This is a disturbing prospect. It would be equally unacceptable to ignore the difference in energy consumption between the developed and developing countries; it would be prudent for the developed countries to move towards a way of life which, whilst maintaining or even increasing quality of life, significantly reduces energy consumption per capita. Such savings can be achieved in a number of ways:
• Improved efficiency of energy use, for example better thermal insulation, energy recovery and total energy.
• Conservation of energy resources by designing for long life and recycling rather than the short-life, throwaway product.
• Systematic replanning of our way of life, for example in the field of transport.
7. Sustainable Development

The concept of sustainable development (SD) is multi-dimensional, and it is no accident that the European Union's (EU's) policies on international cooperation stress research on a multidisciplinary approach to sustainability, such as sustainable agriculture and the management of natural resources. The different dimensions of SD are illustrated in Figure 2 as a framework that involves issues such as science, technology, economic growth and development, health, information and communication technologies (ICTs), education, international debt and aid, trade, policies, war, natural disasters, population growth, terrorism, etc. [1].

In order to counteract the effect of global warming, the pressure to reduce the environmental impact of power and heat production is growing. Reduction of the amount of carbon dioxide emitted during the generation of heat and power is seen as the key to reducing impacts on the environment. However, the demand for power is expected to increase dramatically over the next 10 to 20 years, driven by the continuing economic expansion in countries such as China and India. The economic and environmental impact of cogeneration can be further enhanced by utilising waste gases such as refinery and coke oven gases. It is also worth noting that the application of cogeneration to waste gases in some regions can qualify for carbon credits. This section concentrates on gaseous fuels, with Figure 3 showing the main families of gaseous fuels available for firing gas turbines.
[Figure 2 diagram: a framework linking human need, reviving economic growth, poverty reduction, resource conservation and sustainable population levels across individuals, communities, households, firms, government and NGOs, through education and technology management.]
Figure 2. Dimensions of sustainable development [2].
[Figure 3 chart: gaseous fuel families categorised by heat value (MJ/m3), spanning roughly 10 to 80 MJ/m3: blast furnace gas, coal gas from the gasification process, steel process gas, coke oven gas, coalmine gas, landfill gas, sewage/digester gas, natural gas, LNG, refinery gas and LPG.]
Figure 3. Gaseous fuel types categorised by heating value.
8. Cost Comparison of Diesel and Wind Pumps

The basic purpose of the CWD programme of the mid-1980s was for wind energy to play a significant role in meeting rural energy needs in Sudan. This depended on a new generation of low-cost wind pump designs simple enough for local manufacture to evolve. The CWD of the Netherlands therefore carried out considerable R&D for the Sudan wind energy project. These activities resulted in the development of acceptable wind pump designs suitable for application in Sudan, though the range of application may still be limited to low/medium head situations [2]. Most of the wind pumps are performing well, yet difficulties remain due to factors such as:

(1) The wide range of wind conditions, from low wind speed to desert storm.
(2) Varied water requirements.
(3) Insufficient knowledge among end users about site selection, such as wind turbulence, which requires continuous study and design changes to suit customer requirements.
(4) The price of wind pumps, which is too high for further market penetration.
(5) Lack of credit schemes for users and too little user orientation.
(6) Lack of reliable, cost-effective wind pump designs for extension to certain market areas, such as small farmers (needing small wind pumps) and large irrigation areas, salt production and fish farms (needing large wind pumps with rotor diameters over 7.5 m).
Two systems are compared: (1) a borehole of 35-40 m depth with an 18 HP (13.3 kW) diesel-engine-powered pump; and (2) a borehole of 25-30 m depth with a modified CWD 5000 wind pump. A tentative cost comparison is shown in Table 4, using the formula:

$$ C_T = (A + FP + M)/V \qquad (1) $$

where $C_T$ is the total annual cost, and

$$ A = \frac{C \, I \, (1+I)^T}{(1+I)^T - 1} \qquad (2) $$
where $A$ is the annual cost of capital [3], $C$ is the initial capital cost, $I$ is the interest rate or discount rate, and $T$ is the lifetime; $F$ is the total annual fuel consumption, $P$ is the fuel cost per unit volume, $M$ is the annual maintenance cost, and $V$ is the volume of water pumped (a worked numerical sketch of Eqs. (1) and (2) follows Table 4 below). The comparison indicates that the fuel and maintenance needed to run the diesel pump unit long-term, not the capital cost of the diesel pump itself, are the main lifetime costs. In Sudan, where fuel is expensive, supply is uncertain, the infrastructure is poor and there are many populated remote areas, the following is concluded:

(1) The initial investment cost of wind pumps is too high; this may be a manufacturing-scale problem.
(2) Maintenance costs in some areas are too high for the user.
(3) The lifetime pumping costs are similar for pumping water by wind pump and by diesel pump.
(4) Parallel and integrated projects could reduce costs.
(5) Local production is favoured.
(6) Utilities and water authorities should have responsibilities for technology and investment.
(7) There are substantial power production fluctuations due to variation in wind speed, so using water storage is beneficial.
Notes to Table 4: 1 US$ = S.D. 250 (Sudanese Dinar) in January 2006; annual output 15,000-20,000 m3 of water; annual fuel consumption 490 gallons (1 imperial gallon = 4.55 litres) at a price of S.D. 475 per gallon.

The CWD 5000 has not proved to be a reliable, commercially viable design. After early problems with the furling mechanism and the pump itself were overcome, failures of the head-frame assembly and the crank arm on many machines proved that various design weaknesses urgently needed to be rectified. Nevertheless, there are technically appropriate applications in Sudan for well-designed, reliable wind pumps. The Kijito wind pump, for example, has been field-tested in Kenya and Botswana and found to be a reliable (albeit expensive) machine. The cost comparison table indicates that the fuel and maintenance needed to run the diesel pump unit, not its capital cost, are the main factors governing the overall cost. The maintenance cost for the CWD 5000, however, was too high, which is entirely attributable to its poor design. Therefore, in the case of Sudan, where fuel is expensive, supply is uncertain, the infrastructure is poor and areas are remote, wind machines become more cost-competitive with diesel as the demand and head decrease and as fuel prices and transport distances increase. The following can be deduced from the cost comparison:
• The initial cost of the wind pump was high compared with the diesel pump.
• Maintenance costs of the wind pump were exceptionally high.
• Lifetime water pumping costs were more or less the same for both systems.
• Parallel and integrated projects could reduce costs.
• Local production is favoured.
• Utilities and water authorities should have responsibilities for technology and investment.
• There are substantial power production fluctuations due to variation in wind speed, so using water storage is beneficial.
Table 4. Cost comparison of diesel and wind pumps in Sudanese Dinar (S.D.)

Specification                                           Diesel pump   Wind pump
Cost of borehole deep well                              182,400       114,000
Cost of the system (purchased or fabricated in Sudan)   93,600        440,000
Cost of storage tank                                    -             420,000
Cost of annual fuel consumption                         343,700       -
Cost of maintenance and repair                          120,000       110,000
Total annual cost                                       1,582,100     1,084,000
Specific water pumping cost                             79 per m3     54 per m3
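As a rough numerical check of Eqs. (1) and (2), the sketch below recomputes the bottom row of Table 4 and illustrates the capital-recovery formula; the 10% discount rate, 15-year lifetime and 20,000 m3 annual output are illustrative assumptions, not figures from the text.

```python
def annual_capital_cost(c, i, t):
    """Eq. (2): annual cost of capital A for initial capital cost c,
    interest/discount rate i and lifetime t in years."""
    return c * i * (1 + i) ** t / ((1 + i) ** t - 1)

def specific_pumping_cost(total_annual_cost, volume):
    """Eq. (1) with A + FP + M already aggregated into a total annual cost."""
    return total_annual_cost / volume

# Capital recovery on the wind-pump system cost in Table 4 (S.D. 440,000),
# assuming a 10% discount rate and a 15-year lifetime (assumed values):
print(f"A = {annual_capital_cost(440_000, 0.10, 15):,.0f} S.D./year")

# Bottom row of Table 4, taking the upper end of the stated annual output (20,000 m^3):
print(specific_pumping_cost(1_582_100, 20_000))  # -> 79.105 (diesel, about 79 S.D./m^3)
print(specific_pumping_cost(1_084_000, 20_000))  # -> 54.2   (wind,  about 54 S.D./m^3)
```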
Table 5. Classification of data requirements

               Plant data                                   System data
Existing data  Size; life; cost (fixed and variable O&M);   Peak load; load shape; capital costs;
               forced outage; maintenance; efficiency;      fuel costs; depreciation; rate of return;
               fuel; emissions                               taxes
Future data    All of the above, plus capital costs;        System load growth; fuel price growth;
               construction trajectory; date in service     fuel import limits; inflation
The data required to perform the trade-off analysis simulation can be classified according to the divisions given in Table 5: the overall system or individual plants, and the existing situation or future development. Today the market situation and rules have permanently changed. Progressive modern utilities have left the past behind and are striving towards a modern, competitive energy system. The economic importance of environmental issues is increasing, and new technologies are expected to reduce pollution from both production processes and products, at costs that are still unknown owing to market uncertainty, a weak appropriability regime, the lack of a dominant design, and difficulties in reconfiguring organisational routines. The degradation of the global environment is one of the most serious energy issues, and various options are proposed and investigated to mitigate climate change, acid rain and other environmental problems. Additionally, the following aspects play a fundamental role in developing environmental technologies, pointing out how technological trajectories depend on both exogenous market conditions and endogenous firm competencies:

(1) Regulations concerning the introduction of Zero Emission Vehicles (ZEVs) create market demand and business development for new technologies.
(2) Each stage of technology development requires alternative forms of division and coordination of innovative labour; upstream and downstream industries are involved in new forms of inter-firm relationships, causing a reconfiguration of product architectures and reducing the effects of path dependency.
(3) Product differentiation increases a firm's capability to plan technology reduction and customer selection at the same time, while meeting requirements concerning network externalities.
(4) It is necessary to find and/or create alternative funding sources for each research, development and design stage of the new technologies.
8.1. Privatisation and Price Liberalisation in Energy Source Supplies

It is useful to codify all aspects of sustainability, thus ensuring that all factors are taken into account for each and every development proposal. Therefore, with the intention of promoting debate, a sustainability matrix is presented (Table 6). The following considerations are proposed:

(1) Long-term availability of the energy source or fuel.
(2) Price stability of the energy source or fuel.
(3) Acceptability or otherwise of by-products of the generation process.
(4) Grid services, particularly controllability of real and reactive power output.
(5) Technological stability and the likelihood of rapid technical obsolescence.
(6) Knowledge base for applying the technology.
(7) Life of the installation: a dam may last more than 100 years, but a gas turbine probably will not.
(8) Maintenance requirement of the plant.
Privatisation and price liberalisation in the energy field have to some extent (but not fully) secured the availability of adequate energy supplies to the major productive sectors; the present energy supply situation is far better than it was ten years ago. The investment law has also encouraged investors, at the national level as well as from friendly and sister countries internationally, to invest in energy source supplies such as:

• Petroleum products (imports in particular).
• Electricity generation (in some states) through the provision of large diesel engine units.
The early implementation of electricity price liberalisation has to some extent released the National Electricity Corporation (NEC) from heavy dependency on government subsidies, and noticeable improvements in NEC management and electricity supplies have been achieved. Recent techniques for economically valuing environmental impacts include:

• Effect on production.
• Effect on health.
• Defensive or preventive costs.
• Replacement cost and shadow projects.
• Travel cost.
• Property value.
• Wage differences (the wage differential method attempts to relate changes in the wage rate to environmental conditions, after accounting for the effects of all factors other than the environment, e.g., age, skill level and job responsibility, that might influence wages).

Table 6. Sustainability matrix
Power category                                     1*  2*  3*  4*  5*  6*  7*  8*  9*  Index
Conventional coal fired steam plant                 3   1   1   5   1   1   4   3   3   22
Oil fired steam plant                               2   1   1   5   3   3   4   3   3   25
Combined cycle gas turbine                          2   3   2   4   4   4   4   2   4   29
Micro combined heat and power                       2   3   2   4   4   4   3   2   4   29
Nuclear                                             4   4   3   5   4   4   3   2   3   32
Hydropower                                          5   5   5   3   5   5   5   4   2   39
Tidal power                                         5   5   5   2   5   5   5   4   2   38
Onshore wind                                        5   5   5   2   5   5   4   4   3   38
Offshore wind                                       5   5   5   2   5   5   3   4   4   38
Land-fill gases                                     3   5   3   1   3   4   4   3   2   28
Municipal incineration                              5   5   4   3   4   4   4   3   4   36
Biomass, field and forest crops plus waste straw    5   5   4   3   4   4   4   3   4   36
Import                                              1   1   5   1   5   5   5   5   5   33
Hydro pumped storage                                -   -   5   5   5   5   5   5   2   32
Electrochemical storage                             -   -   4   4   4   4   4   4   5   29
Diesel                                              2   1   1   1   4   5   3   4   4   25

Key: 1* fuel availability; 2* price stability of fuel; 3* by-product acceptability; 4* grid services; 5* technological obsolescence; 6* knowledge base; 7* life of the installation; 8* maintenance requirement; 9* infrastructure requirements.
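The Index column is consistent with a simple row sum of the nine criteria scores, with the blank cells of the storage options contributing nothing; a minimal sketch verifying three rows:

```python
# Verify that the Index column of Table 6 equals the row sum of the nine criteria
# scores; blank cells (the storage options, which consume no fuel) count as zero.
scores = {
    "Conventional coal fired steam plant": [3, 1, 1, 5, 1, 1, 4, 3, 3],
    "Onshore wind": [5, 5, 5, 2, 5, 5, 4, 4, 3],
    "Hydro pumped storage": [None, None, 5, 5, 5, 5, 5, 5, 2],
}
for plant, row in scores.items():
    index = sum(v for v in row if v is not None)
    print(f"{plant}: index = {index}")
# -> 22, 38 and 32, matching the Index column above.
```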
Table 7. Comparative production costs

Plant type                   Levelised energy production cost (cents/kWh)
Solar water heating          4.0-8.0
Solar photovoltaic           13.0-32.0
Municipal solid waste        3.5-15.3
Biomass (direct combustion)  6.3-11.0
Landfill gas                 2.4-6.3
Wind                         3.0-6.5
[Figure 4 chart: energy R&D investment (£) from 1974 to 2004, with curves for efficiency, renewables and fusion.]
Figure 4. Energy R&D investment.
The economic and environmental impact of cogeneration can be further enhanced by utilising waste gases such as refinery and coke oven gases. Table 7 summarises some renewable energy production costs. It is also worth noting that the application of cogeneration to waste gases in some regions can qualify for carbon credits.

There was a killing in October 2008, but no blood was spilled, no shots were fired and no screams pierced the night; it was not reported to the police nor mentioned in the newspapers. It was the world's economies that started to crumble in the aftermath of the global financial meltdown. When people have jobs, food and prosperity, it is possible to contemplate a greener existence. Nevertheless, when trillions of dollars of wealth evaporate overnight and economic panic sets in, priorities are instantly rearranged. The post-oil-crisis wave of R&D funding, followed by a progressive decline, is illustrated in Figure 4.

Water is essential for life and for most activities of human society. Both economic and social development and the maintenance of human health are completely dependent upon ready access to adequate water supplies. All societies require water both for basic survival and for economic development. The indicator of naturally available water resources per capita has become the standard index for measuring the degree to which a country faces water scarcity and is often used to show a growing global water crisis. Problems satisfying water resource needs or demands are affecting a growing proportion of the world, primarily in arid and semi-arid regions where population pressures are considerable and demand for water is currently rising faster than at any time previously [4]. The relatively comprehensive development-indicator database of the United Nations Development Programme (UNDP) and the water resources database of the Food and Agricultural Organisation (FAO) provide no support for the notion that the naturally available water resources of a country have a significant effect on its ability to meet the basic needs of its population. Where water supply costs are high, the water sector will contribute more directly to a nation's gross domestic product (GDP) than where costs are low, because the sector will be more significant economically. With increasing globalisation of trade, global water
interdependencies and overseas externalities are likely to increase. At the same time, liberalisation of trade creates opportunities to increase physical water savings. International water dependencies are substantial and are likely to increase with continued global trade liberalisation.
8.2. Greenhouse Gas (GHG) Emissions

Global warming is one of the great eco-societal challenges facing humankind. Global average temperatures have risen by 0.6°C over the past century and, if the current trend is maintained, a further 1.5-5°C could be added over the next 100 years [5]. Global warming is a serious problem, and some limit on GHGs needs to be established. A full carbon inventory can also be created to take account of land-surface changes. The fossil fuel market is well defined and contains a manageable number of producers with good data on production (Table 8). Wind power is far from the only clean energy sector on the rise, and many of the technologies following in its tracks are much more decentralised, including roof-top systems such as photovoltaics (PV) or solar thermal, and energy efficiency technologies on the demand side. Solar and energy efficiency were actually the two largest sectors in terms of venture capital investment, with solar bringing in 30% and efficiency 18% (Figure 5). Besides the high level of early-stage investment, mostly focused on new technology development, these two sectors also fared well on the public stock markets, ranking second and third after wind. Solar would have overtaken wind on the public markets.
Table 8. Carbon dioxide emissions from fossil fuel combustion (MMt)

Fossil fuel          Emissions (MMt)
Petroleum products   10,850
Natural gas          5,600
Coal                 10,600
Total                27,050

[Figure 5 pie chart: shares of investment across solar (30%), efficiency (18%), biofuels, biomass, wind, other renewables and low carbon technologies.]
Figure 5. Investment in renewables and energy efficiency.
Conclusions

We live in a society where unprecedented consumption is the norm. Nevertheless, people are only just starting to be concerned about the process that gets products onto the shelves and the effect of their use on our planet. The massive increases in fuel prices over recent years have, however, made any scheme not requiring fuel appear more attractive and worth reinvestigation. Economic projections are difficult at the best of times, when economies are relatively stable and a reference 'business as usual' case can be used. There are numerous signals that the world faces very turbulent economic conditions for a while: a credit crunch may make some project finance difficult, and shortages of raw materials could lead to supply chain difficulties. However, the rapidly escalating price of oil is focusing a lot of attention on the price of energy, and the hedge of an electricity supply without a fuel cost is likely to become increasingly attractive to many companies and utilities. At some stage, rising fuel costs could lead to demand for wind energy becoming almost infinite. The main factors expected to influence the continuing growth of the energy sector are:
• The economies of the transition states (Russia and Central Asia) will start to grow.
• Increasing energy demand in Asia and South America.
• Oil prices will continue to remain high, as will demand for fossil fuels.
• Continuing competitiveness of renewables with fossil fuels.
• Many countries may find they are well short of their international CO2 reduction commitments and need to install new renewable capacity very quickly.
• Security of supply questions will continue to support renewable technologies.
• Deregulated markets will remove excess conventional power capacity, and new capacity is likely to be more expensive than wind.
References

[1] Ahmed, A. Making technology work for poor: strategies and policies for African sustainable development. International Journal of Technology, Policy and Management, 4(1), 1-17. 2004.
[2] Omer, AM. Wind speeds and wind power potential in Sudan. In: Proceedings of the 4th Arab International Solar Energy Conference. Amman: Jordan. 1993.
[3] Gingold, PR. The cost-effectiveness of water pumping windmill. Wind Engineering, Vol. 3. Multi-Science Publishing Company. 1979.
[4] Rodda, JC. Water under pressure. Hydrological Sciences Journal (Journal des Sciences Hydrologiques), 46(6), 841-854. 2001.
[5] Meyer, A. Contraction and convergence: the global solution to climate change. Green Books. 2000.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 99-132
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 3
TROPICAL CYCLONE-OCEAN INTERACTION: CLIMATOLOGY

Akiyoshi Wada*
Meteorological Research Institute, Tsukuba, Ibaraki, Japan
Abstract
The ocean is an energy source for developing tropical cyclones (TCs) that originate over the tropical oceans. Warm water and winds are crucial factors for determining heat and moisture fluxes from the ocean to the atmosphere. These fluxes are closely associated with cumulus convection and large-scale condensation due to latent heat release in the upper troposphere. Both physical processes are essential for increasing the upper-tropospheric warm-core temperature around a TC. Therefore, warm water over the tropical oceans is required to generate and intensify TCs. Recently, tropical cyclone heat potential (TCHP), a measure of the oceanic heat content from the surface to the 26°C-isotherm depth, has frequently been used for monitoring TC activity in the global oceans, particularly in the Atlantic and western North Pacific. Recent studies have reported that TC intensity is correlated with accumulated TCHP (ATCHP), calculated as a summation of TCHP every six hours from TC genesis until first reaching categories 4 and 5 of the Saffir-Simpson scale, as well as with sea-surface temperature (SST) and TC duration. This implies that both SST and upper-ocean stratification, such as temperature, salinity, and mixed-layer and seasonal-thermocline depths, play crucial roles in determining TC intensity and intensification. Conversely, TCHP can be varied by mixed-layer deepening and Ekman pumping induced by TC passage through TC-induced sea-surface cooling (SSC). The SSC is evidence that ocean energy is consumed in developing and sustaining TCs. In that sense, a climatological map of the TCHP distribution is valuable for assessing the potential for TC activity. A 44-year mean climatological TCHP distribution in the North Pacific indicates that TCHP is locally high in the Southern Hemisphere Central Pacific (SCP) and Western North Pacific (WNP). TCHP varies on interannual and decadal time scales and is related to TC activity. The relatively low TCHP in the WNP is associated with an increase in the total number of TCs. This may indicate that low TCHP is caused by frequent TC-induced SSC. When an El Niño event enters the mature phase, it leads to an increase in the number of super typhoons corresponding to categories 4 and 5. The increase in the number of super typhoons is related to an increase in ATCHP due to the trend of long-duration TCs.
* E-mail address: [email protected]. Tel: +81-29-852-9154; Fax: +81-29-853-8735. Meteorological Research Institute, 1-1 Nagamine, Tsukuba, Ibaraki, 305-0052 Japan. (Corresponding author)
This chapter addresses the benefits of TCHP as a useful ocean-energy parameter for monitoring interannual and decadal variability of TC activity in the global ocean.
1. Introduction

One of the factors influencing tropical cyclone activity (TCA) is upper-ocean thermal forcing. A representative upper-ocean thermal forcing for the atmosphere is sea-surface temperature (SST). It is commonly believed that high SST (over 26ºC) is required for tropical cyclone (TC) genesis [1]. In addition, SST is often used for calculating TC maximum potential intensity (MPI) based on a simple Carnot cycle theory [2]. Indeed, the relationship between analyzed (best-track) TC intensity and SST is utilized for statistically predicting the MPI using the National Hurricane Center Statistical Hurricane Intensity Prediction Scheme (SHIPS) [3-5].

The ocean is an energy source for a TC that originates in the tropical or subtropical ocean. The ocean can supply heat and moisture to a TC. The heat and moisture fluxes are determined from the SST, surface air temperature, surface moisture and surface wind. Estimating heat and moisture fluxes at the interface between the atmosphere and the ocean is important for determining air-sea interactions. However, it is difficult and very expensive to observe the heat and moisture fluxes directly. That is why we usually use an empirical bulk formula with bulk exchange coefficients for estimating sensible (heat) and latent (moisture) fluxes as well as momentum flux (in general, wind stress) [6]. The sensible and latent fluxes are associated with cloud physics, cumulus convection, boundary-layer physics and radiation. A TC is intensified by warming at its warm core, which is in turn induced by the latent heat release through large-scale condensation and cumulus convection. The above-mentioned atmospheric response to the warm ocean is valid on a weather-forecasting time scale and is acceptable on seasonal to climate time scales.

Recently, both SST and upper-ocean temperature and salinity profiles have begun to garner attention. The upper-ocean heat content in the tropical or subtropical oceans is a primary factor for understanding TCA and TC-ocean interaction. Whether SST or upper-ocean thermal structure is more important for MPI and TC intensification continues to be controversial [7-9]. The controversy may be attributed to differences in temporal and spatial scales and basins. SST and upper-ocean heat content can vary due to TC-induced sea-surface cooling (SSC). SSC is produced mainly by vertical turbulent mixing (a one-dimensional process) and Ekman pumping (a three-dimensional process) and ranges from 0° to 2°C around the inner core of a TC, versus 4° to 5°C over the cold wake [10]. This implies that a TC can locally change the distributions of SST and upper-ocean heat content. The oceanic response to a TC is well known on a weather-forecasting time scale. However, whether or not the oceanic response to TCs is significant on seasonal to climate time scales is open to dispute. A recent study reported that approximately 15 per cent of the peak ocean heat transport may be associated with the vertical mixing induced by TCs, indicating that the oceanic responses to TCs affect the global climate [11]. Here we address the relationship between TCA in the western North Pacific (WNP) and upper-ocean thermal forcing on seasonal to climate time scales. It should
be noted that TCA includes the frequencies of TC genesis, their locations and their intensities represented by maximum wind speed or central sea-level pressure.
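To make the bulk formulae concrete, here is a minimal sketch of the sensible, latent and momentum flux estimates; the exchange coefficients and thermodynamic constants are typical open-ocean magnitudes chosen for illustration, not values given in this chapter.

```python
# Bulk aerodynamic estimates of wind stress and sensible/latent heat fluxes.
# Coefficient values are typical open-ocean magnitudes (assumed, not from the text).
RHO_AIR = 1.2     # air density (kg m^-3)
CP_AIR = 1004.0   # specific heat of air (J kg^-1 K^-1)
LV = 2.5e6        # latent heat of vaporisation (J kg^-1)
C_D = 1.3e-3      # drag coefficient
C_H = 1.1e-3      # sensible heat exchange coefficient
C_E = 1.2e-3      # latent heat exchange coefficient

def bulk_fluxes(wind, sst, t_air, q_sea, q_air):
    """Return (wind stress in N m^-2, sensible and latent heat fluxes in W m^-2)."""
    tau = RHO_AIR * C_D * wind ** 2
    q_h = RHO_AIR * CP_AIR * C_H * wind * (sst - t_air)   # sensible (heat) flux
    q_e = RHO_AIR * LV * C_E * wind * (q_sea - q_air)     # latent (moisture) flux
    return tau, q_h, q_e

# Example: 30 m/s TC winds over 29 C water under 27 C air (illustrative humidities)
print(bulk_fluxes(30.0, 29.0, 27.0, q_sea=0.025, q_air=0.018))
```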
2. SST and TCHP
2.1. SST

It is generally believed that SST is the most important factor in the air-sea interaction system of the global ocean. The sea surface is the lower boundary of the atmosphere, and its temperature affects the weather and climate, while SST is itself controlled by atmospheric conditions. SSTs have been observed for more than a century by voluntary ships, research vessels, drifting and moored buoys, satellites, and in situ sensors. In fact, SST is the most abundant dataset in oceanography. However, we do not have enough SST data around TCs because of the difficulty of observations by voluntary ships and research vessels, and because observations by satellites and in situ sensors are obscured by thick clouds.

Here we use the Tropical Rainfall Measuring Mission (TRMM)/TRMM Microwave Imager (TMI) three-day mean SST (http://www.ssmi.com/tmi/tmi_3day.html) for investigating the relationship between SST and TC intensity. The TRMM/TMI three-day mean SST covers a global region extending from 40ºS to 40ºN with a horizontal grid spacing of 0.25º. One of the benefits of using the TRMM/TMI dataset is that the TRMM/TMI microwave retrievals can measure SST through clouds. The TRMM/TMI products began being distributed in December 1997. In addition to the TRMM/TMI three-day mean SST dataset, the daily Microwave Optimally Interpolated (OI) SST (OISST) dataset (http://www.ssmi.com/sst/microwave_oi_sst_browse.html) is another useful product. The MW OI SST dataset covers the global ocean with a horizontal grid spacing of 0.25°, including data obtained from TRMM/TMI microwave retrievals and Aqua/Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) satellite radiometers. The MW OI products began being distributed in May 2002, later than the TRMM/TMI products. To extend the period for investigating the relationship between SST and TC intensity, we use version 4 of the TRMM/TMI three-day mean SST dataset from 1998 to 2007, updated from 13 September 2006.

We must pay attention to the representative depths of SST [12-13]. The TRMM/TMI measures SST at a depth of less than 1 mm, at the bottom of the skin SST defined as a 500-μm-thick layer [13]. The SST below the bottom of the skin SST is usually measured by voluntary or research ships or vessels using a bucket; the representative depth of this SST is a few meters. The representative depth of a climate SST dataset is generally close to the bucket SST. The vertical temperature structure in the upper ocean depends on maritime conditions [12-13]. Strong solar radiation and weak wind lead to the formation of strong stratification in the skin layer during the day, producing a remarkable peak of diurnally-varying SST under such maritime conditions. In contrast, strong wind disrupts the stratification and results in an oceanic mixed layer where sea-water density is uniform. Around a TC, the mixed layer is relatively deep, particularly on the right side of the traveling direction [14].
2.2. TCHP

Tropical cyclone heat potential (TCHP) is a measure of the oceanic heat content from the surface to the 26°C-isotherm depth [15]. TCHP is defined as

$$ Q_{\mathrm{TCHP}} = \sum_{h=0}^{H} \rho_h C_p (T_h - 26)\,\Delta Z_h \qquad (1) $$
where $\rho_h$ is the density of the sea water at each layer, $C_p$ is the specific heat capacity at constant pressure, $T_h$ is the sea temperature (ºC) at each layer, $\Delta Z_h$ is the thickness of each layer, $H$ is the vertical level corresponding to the depth of the 26ºC isotherm (hereafter Z26), and $h$ is the index of vertical levels based on the configuration of the ocean data reanalysis system described in section 3. When $T_h$ is below 26ºC, the TCHP contribution of that layer is assumed to be zero.

TCHP first appeared in the 1970s and was used as a measure of the large input of energy from the ocean for establishing and maintaining hurricane-force winds over the sea [15]. TCHP was then used for ocean thermal forcing and was applied as one of the factors for measuring TC genesis potential instead of SST [16]. In spite of these early studies, however, TCHP was not used for TC-ocean interaction studies for nearly 20 years, partly because of the difficulty of obtaining in situ oceanic data in tropical-subtropical oceans under strong wind conditions. In the 1990s, this situation was dramatically changed by the advent and development of satellite observations. A conventional methodology for estimating TCHP using the European Remote Sensing Satellite-2 (ERS-2) and TOPEX/Poseidon satellite altimetry observations and a reduced-gravity ocean model has been developed [17-18]. The TCHP products have been provided to the public via an Internet Web page (http://www.aoml.noaa.gov/phod/cyclone/data/).

Sea-surface height (SSH) and its anomaly (SSHA) are useful oceanic parameters for detecting warm or cold eddies in the ocean. Their distributions sometimes affect TC intensification in the Atlantic (ATL) [19-20] and in the western North Pacific (WNP) [21]. However, there is no common reference depth associated with SSH and SSHA among satellites and in situ sensors. When satellite data are merged together, temporal-spatial data corrections among the satellite datasets must be carefully monitored. In that sense, oceanic reanalysis data obtained from an ocean data assimilation system can statistically remove this temporal-spatial ambiguity, in that the system accounts for the systematic errors of satellite data through the procedure for assimilating them. Using TCHP data calculated from oceanic reanalysis data, the relationship between TCHP and TC intensity has been clarified both on an individual TC lifetime scale [22] and on seasonal to climate time scales in the WNP [23]. This chapter addresses recent progress in clarifying the relationship between TCHP and TC intensity in the WNP using these oceanic reanalysis data. It should be noted that a depth-averaged temperature may be more appropriate over a much wider range of oceanic conditions, such as cool open-ocean waters, salt-stratified waters, and shallow coastal waters [24]. The effect of cool open-ocean waters on TCHP is not always reflected in the oceanic reanalysis data due to its relatively coarse spatial and temporal resolution, which is introduced in section 3.
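A minimal sketch of Eq. (1) on an idealised temperature profile follows; for simplicity the density and specific heat are held constant, whereas Eq. (1) allows a layer-dependent density.

```python
import numpy as np

RHO = 1025.0  # sea-water density (kg m^-3), held constant in this sketch
CP = 3990.0   # specific heat of sea water (J kg^-1 K^-1)

def tchp(temps_c, thicknesses_m):
    """Eq. (1): sum rho*Cp*(Th - 26)*dZh over layers from the surface down to the
    26 C isotherm; layers below 26 C contribute zero, as stated in the text."""
    t = np.asarray(temps_c, dtype=float)
    dz = np.asarray(thicknesses_m, dtype=float)
    excess = np.clip(t - 26.0, 0.0, None)         # (Th - 26), zeroed where Th < 26 C
    return float(np.sum(RHO * CP * excess * dz))  # J m^-2

# Idealised tropical profile: a warm mixed layer over a thermocline
layer_temps = [29.0, 28.5, 27.5, 26.5, 25.0, 22.0]   # C, surface downward
layer_dz = [10.0, 20.0, 20.0, 25.0, 25.0, 50.0]      # m
q = tchp(layer_temps, layer_dz)
print(f"TCHP = {q / 1e7:.0f} x 10^7 J m^-2 (= {q / 1e7:.0f} kJ cm^-2)")
```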
3. Ocean Data Analysis System In general, oceanic reanalysis data are calculated by an ocean data assimilation system. Here, we use oceanic reanalysis data calculated by an ocean data assimilation system developed at the Meteorological Research Institute of the Japan Meteorological Agency. The Meteorological Research Institute multivariate ocean variational estimation system (MOVE) [25] consists of an ocean general circulation model and a multivariate three-dimensional variational analysis (3DVAR) scheme (Figure 1). The MOVE system uses the Meteorological Research Institute community ocean model (MRI.COM) as an ocean general circulation model [26]. The 3DVAR scheme is based on 12 representative modes derived from the vertically coupled temperature-salinity empirical orthogonal function (EOF) of a background error covariance matrix, explaining more than 85 per cent of the total variance [25]. The MOVE system has two stand-alone versions: the global and North-Pacific versions. Here, we use the North-Pacific version for creating oceanic reanalysis data from 1998 to 2007.
3.1. Model
MRI.COM is a multilevel ocean general circulation model that solves primitive equations under the hydrostatic and Boussinesq approximations. A terrain-following-depth hybrid coordinate system is applied as the vertical coordinate system in MRI.COM. A generalized scheme for preserving enstrophy [27] is applied together with the Takano-Oonishi scheme [28-30] for calculating the advection of momentum. The Takano-Oonishi scheme can diagnose upward or downward mass momentum fluxes along a sloping bottom [28-30]. A no-slip condition is adopted for lateral boundaries. Bottom friction is parameterized based on Weatherly's methodology [31].
Figure 1. Schematic diagram of the MOVE system design.
Figure 2. Model domain of the North-Pacific version of the MOVE system. Subdomains indicate where the EOF analysis is made for each area.
Figure 2 illustrates the domain of the North-Pacific version of the MOVE system, which employs a longitude-latitude coordinate system with a horizontal grid spacing of 0.5°. There are a total of 54 vertical layers, including 24 layers above 200 m. Smith-Sandwell bottom topography is used for the model topography [32]. The surface-layer thickness is set to 1 m, the bottom-layer thickness is set to 250 m, and the maximum bottom depth is 5625 m. The detailed specifications of MRI.COM included in the North-Pacific version of the MOVE system are as follows. The isopycnal diffusive coefficient [33] is 1.0 x 10^2, the diapycnal diffusive coefficient is 1.0 x 10^-4, and the thickness diffusive coefficient in the isopycnal thickness diffusion [34] is 1.0 x 10^2. The background coefficient for vertical diffusion [35], biharmonic viscosity [36], and Noh and Kim's mixed-layer scheme [37] are used. A detailed description of MRI.COM can be found in the technical documents [26].
3.2. Assimilation Scheme

The main procedure of the variational analysis scheme in the MOVE system is a vertically coupled temperature-salinity EOF modal decomposition of a background error covariance matrix in a multivariate 3DVAR analysis scheme [38-39]. The amplitudes of the coupled EOF modes are used as control variables, and analyzed temperature and salinity are calculated by a linear combination of representative EOF modes. A cost function is defined as follows:

$$ J(\mathbf{w}) = \tfrac{1}{2}\,\mathbf{w}^{\mathrm{T}}\mathbf{B}^{-1}\mathbf{w} + \tfrac{1}{2}\,\mathbf{q}(\mathbf{x})^{\mathrm{T}}\mathbf{R}^{-1}\mathbf{q}(\mathbf{x}) + C(\mathbf{x}) \qquad (2) $$
where $\mathbf{w}$ is the vector of the control variables, $\mathbf{B}$ is the background error covariance matrix, and $\mathbf{R}$ is the observation error covariance matrix. It should be noted that $\mathbf{B}$ is a non-diagonal matrix indicating horizontal correlations among background errors. The vector $\mathbf{x} = \mathbf{G}\mathbf{w} + \mathbf{x}^f$ is the state vector for temperature and salinity, where $\mathbf{G}$ denotes the transformation from the
control variable $\mathbf{w}$ to deviations of the gridded temperature and salinity fields from the first-guess vector $\mathbf{x}^f$. The first term on the right-hand side of Eq. (2) is a constraint for background data, while the second term is a constraint for observed data. The $i$-th ingredient of the vector $\mathbf{q}$ in the second term is

$$ [\mathbf{q}]_i = l\big([\mathbf{h}(\mathbf{x}) - \mathbf{y}]_i / \sigma_i\big) \qquad (3) $$
where $\mathbf{y}$ is the observed data and $\sigma_i$ is the $i$-th standard observation error. The observation operator $\mathbf{h}$ includes the calculation of SSH and a horizontal interpolation when satellite altimetry data are available. The function $l(v)$ indicates a variational quality-control procedure defined as follows:

$$ l(v) = \begin{cases} -(a+b)/2 & (v < -b) \\ -\{(v+b)^2/(a-b) + a + b\}/2 & (-b < v < -a) \\ v & (-a < v < a) \\ \{(v-b)^2/(a-b) + a + b\}/2 & (a < v < b) \\ (a+b)/2 & (v > b) \end{cases} \qquad (4) $$

where $a = 1.5$ and $b = 3$ are parameters satisfying $0 < a < b$.
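A direct transcription of Eq. (4) as a minimal sketch; the parameter values follow the text.

```python
A, B = 1.5, 3.0  # a and b from the text

def l(v, a=A, b=B):
    """Variational quality-control function of Eq. (4): the identity for small
    residuals, smoothly saturating at +/-(a+b)/2 to limit outlier influence."""
    if v < -b:
        return -(a + b) / 2.0
    if v < -a:
        return -((v + b) ** 2 / (a - b) + a + b) / 2.0
    if v < a:
        return v
    if v < b:
        return ((v - b) ** 2 / (a - b) + a + b) / 2.0
    return (a + b) / 2.0

for v in (-4.0, -2.0, 0.5, 2.0, 4.0):
    print(v, round(l(v), 3))   # saturates at -2.25 and +2.25
```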
(> 70%) conditions in 2004. In 2000, an enhanced southwesterly monsoon is trapped west of the Philippines in the South China Sea, where a monsoonal trough is located. Enhanced easterly winds bring dry air into the confluent zone where rapid intensification occurs. That is why the confluent zone is dry in spite of the high TCHP. In other words, a high TCHP does not always correspond to high relative humidity in the lower troposphere.
Figure 9. Composite maps of winds (m s-1), relative humidity (percent), and isobaric height at the 850hPa height. “Typhoon” symbols indicate where rapid intensification occurs. [From Wada and Usui 2007].
The monsoonal trough shifts eastward in 2004. Relative humidity is relatively high around the area where rapid intensification frequently occurs, and TCHP is relatively low. Easterly winds on the southern edge of a subtropical high are relatively weak and meander east of the area where rapid intensification frequently occurs. Wave-like perturbations seen in the easterly flow over the central Pacific have a wavelength of 2000km, a meridional wave number n=3, and a period of 5 to 10 days [74]. The perturbations slow down upon
approaching the monsoonal westerly flow. Other perturbations moving eastward in the monsoonal westerlies are trapped around the confluent zone at that time. This trapping leads to enhanced tropical cyclogenesis around the area where rapid intensification frequently occurs, although further atmospheric forcing is not required for rapid intensification beyond its interaction with a warm ocean [75]. The lower-tropospheric relative humidity is determined from the temperature and specific humidity. A low SST results in a low lower-tropospheric temperature, which leads to high relative humidity; conversely, relative humidity is relatively low when the SST and lower-tropospheric temperature are high and the specific humidity hardly changes. This suggests that environmental dynamics, not environmental thermodynamics, plays a crucial role in rapid intensification. Environmental dynamics varied between 2000 and 2004, possibly in relation to the TCHP distribution.
Figure 10. Same as Figure 9 but for winds at the 200hPa height and vertical shear between 200hPa and 850hPa heights. [From Wada and Usui 2007].
Composite maps of winds at the 200hPa height and of the square of the vertical wind shear between the 200hPa and 850hPa heights are presented for both 2000 and 2004, when rapid intensification frequently occurs [22] (Figure 10). Rapid intensification occurs under weak vertical wind shear and a large-scale divergent field at the 200hPa height. The area of relatively weak vertical shear is larger in 2000 than in 2004. In 2004, the amplitude of the vertical wind shear is relatively high north of 30°N, due to relatively strong westerlies, and along the southern edge of a subtropical high, due to weak easterlies at the 850hPa height. In 2004, westerlies at the 200hPa height in the mid-latitudes are zonally enhanced compared with those in 2000. In addition, north-easterlies at the 200hPa height around 5°N, 130°E and westerlies around 20°N, 170°W, where vertical wind shear is relatively strong in the CP, are enhanced in 2004. The area of weak vertical wind shear in 2004 is much smaller than that in 2000, resulting in a narrower divergent field at the 200hPa level in 2004. Interestingly, rapid intensification in 2004 occurs along the vertical-shear line, whereas rapid intensification is irrelevant to the vertical-shear line in 2000. Even though weak vertical wind shear is a crucial dynamical factor for TC formation [16] and is important for rapid intensification [22], lower-tropospheric dynamics around the confluent zone also plays an essential role in rapid intensification.
6. TCHP and TC Activity on a Climate Scale
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
6.1. Background On a weather-forecasting time scale, TCHP is related to TCA in that ATCHP is correlated with the minimum central pressure and the TCHP distribution may be linked to atmospheric environments. In addition, previous numerical studies suggested that in situ TCHP is highly correlated with TC intensity during the decay phase [22, 46, 70]. The next concern is the relationship between TCHP and TCA on a seasonal to climate time scale. Recent studies suggested a large increase in the number and proportion of intense TCs reaching categories 4 and 5 on the Saffir-Simpson scale and that the trend was due to both longer storm lifetimes and greater storm intensities associated with global warming in the North Atlantic and North Pacific [76-77]. A strong El Niño-Southern Oscillation (ENSO) signal is related to the mean location of TC formation [47], its mean life span, mean number of TC occurrences [48, 78], TC landfalling activity [79], and mean recurvature area of the TC track [80]. The index of ENSO has been used as a predictor in seasonal forecasts of TCA in various basins [81-83]. Whether variations in TCA are a part of the large interdecadal variability [84] or due to an increase in SST [85] has continued to be controversial. On a weather-forecasting time scale, TCHP can be regarded as a factor that explains TC intensity. In general, the passage of a TC results in SSC due to vertical mixing and Ekman pumping [69, 70], a decrease in sea-surface heat flux, and suppression of the development of TCs and thus TCHP decreases [46]. Quite a few studies have supported such a locally transient TC-ocean interaction. However, TC-ocean interaction on a climate time scale, particularly the features of TCHP, has been rarely studied due to the unavailability of a long historical oceanic dataset. An additional question is whether or not warming on a basin scale during the last 50 years [86] has affected TC activity. This section therefore describes climatological features of TCHP and investigates its relation to TCA in the North Pacific, particularly in the WNP.
The Joint Typhoon Warning Center (JTWC) best-track data are used from 1961 to 2004 to obtain the six-hour best-track positions of TCs and their maximum sustained wind speeds over the North Pacific. We define TC-related parameters for TCs in the WNP as follows. The TC duration day (TCDAY) is the period during which the maximum wind speed is ≥ 34 knots. The typhoon period (maximum wind speed ≥ 64 knots) is called the typhoon-duration day (TYDAY). The period of a super typhoon (maximum wind speed ≥ 115 knots, Saffir-Simpson categories 4 and 5) is called the super-typhoon-duration day (STYDAY). The numbers of TCs, typhoons, and super typhoons are TCNUM, TYNUM, and STYNUM. Accumulated cyclone energy [87] is calculated by summing the squares of the estimated maximum sustained wind speeds of every active tropical storm (maximum wind speed ≥ 34 knots: ACE), every active tropical storm until first reaching category 4 (maximum wind speed ≤ 115 knots: ACE4), and every active super typhoon (ACES), at six-hour intervals; the unit of ACE is 10^4 kt^2. The power dissipation index [77] is calculated by summing the cubes of the estimated maximum sustained wind speeds of every active tropical storm (≥ 34 knots: PDI), every active tropical storm until first reaching category 4 (≤ 115 knots: PDI4), and every active super typhoon (PDIS), at six-hour intervals; the unit of PDI is 10^6 kt^3.
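A minimal sketch of the ACE and PDI definitions above, applied to one hypothetical six-hourly wind record; the 34-knot threshold selects active tropical-storm fixes.

```python
def ace(winds_kt, threshold=34.0):
    """Accumulated cyclone energy: sum of squared six-hourly maximum sustained
    winds (kt) over active fixes, in units of 10^4 kt^2."""
    return sum(v ** 2 for v in winds_kt if v >= threshold) / 1e4

def pdi(winds_kt, threshold=34.0):
    """Power dissipation index: sum of cubed six-hourly maximum sustained
    winds (kt) over active fixes, in units of 10^6 kt^3."""
    return sum(v ** 3 for v in winds_kt if v >= threshold) / 1e6

# One hypothetical TC lifetime sampled at six-hour intervals (kt)
track = [25, 35, 45, 60, 80, 100, 115, 120, 110, 90, 60, 30]
print(f"ACE = {ace(track):.2f} x 10^4 kt^2")
print(f"PDI = {pdi(track):.2f} x 10^6 kt^3")
```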
6.2. TCHP Climatology

To understand the relative magnitudes of TCHP variations between seasonal and interannual time scales, the ratio $r$ of the interannual variations of TCHP to those on a seasonal time scale is introduced as

$$ r = \sqrt{\overline{(H_{y,m} - \bar{H}_m)^2}} \Big/ \sqrt{\overline{(H_{y,m} - \bar{H})^2}} \qquad (5) $$
where $H_{y,m}$ is the raw TCHP in the $y$-th year and $m$-th month, $\bar{H}_m$ is the monthly mean TCHP of the $m$-th month over the 44 years from 1961 to 2004, and $\bar{H}$ is the 44-year mean TCHP. The overbar indicates the mean value of each summation over each year and month or both.

Seasonal variations of TCHP are dominant, particularly in the low-latitude WNP (small value of $r$), but the ratio of interannual to seasonal TCHP variations is locally high around 10ºN, 130ºE to 140ºE ('A' in Figure 11a), like a footprint over the warm ocean. In contrast, the ratio of interannual to seasonal TCHP variations is nearly 1.0 in the equatorial eastern Pacific (EPA). The ratio is locally low around 10°N to 20°N, 100°W to 120°W in the EPA, indicating that the seasonal variations are remarkable around that area. The climatological distribution of TCHP over the Pacific exhibits two high-TCHP areas: one around the tropical central Pacific and the other in the WNP ('B' and 'C' in Figure 11b), consistent with the TCHP distribution in Figure 8a. In contrast, TCHP is relatively low in the EPA, although it is locally high where the ratio of interannual to seasonal TCHP variations is relatively low. TCs are generated around the high-TCHP region in the WNP, which differs from TCs generated in the EPA (Figure 11c).
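A minimal sketch of Eq. (5) at a single grid point, using synthetic monthly data in which a strong seasonal cycle dominates, so that $r$ comes out well below 1, as in the low-latitude WNP.

```python
import numpy as np

rng = np.random.default_rng(0)
years, months = 44, 12
seasonal = 30.0 * np.sin(2.0 * np.pi * np.arange(months) / months)  # seasonal cycle
tchp = 60.0 + seasonal[None, :] + 5.0 * rng.standard_normal((years, months))

monthly_mean = tchp.mean(axis=0)  # H-bar_m: mean of each calendar month over 44 years
grand_mean = tchp.mean()          # H-bar: the 44-year mean
num = np.sqrt(((tchp - monthly_mean[None, :]) ** 2).mean())  # interannual RMS
den = np.sqrt(((tchp - grand_mean) ** 2).mean())             # seasonal + interannual RMS
print(f"r = {num / den:.2f}")     # small r where the seasonal cycle dominates
```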
Figure 11. Horizontal distributions of (a) the ratio of root mean square of TCHP anomaly deviating from the monthly mean TCHP for 44 years to that of the anomaly deviating from annual mean tropical cyclone heat potential for 44 years, (b) TCHP averaged from 1961 to 2004, (c) genesis locations from JTWC best-track data, (d) same as in Figure 11a except for Z26, and (e) same as in Figure 11b except for Z26, and (f) same as in Figure 11c except for the locations of TCs upon first reaching category 4 on the Saffir-Simpson scale. Subscripts A to F indicate a high ratio of TCHP (A), high TCHPs (B, C), high ratio of Z26 (D), and high Z26s (E, F). The square in Figure 11f indicates the area of rapid intensification [77]. [Reproduced from Wada and Chan 2008]
The ratio of interannual to seasonal Z26 variations is locally high around 10ºN, 130ºE to 140ºE ('D' in Figure 11d), coinciding with the area where the ratio of interannual to seasonal TCHP variations is locally high. Of particular interest, this ratio is locally high east of the Philippines where Z26 is relatively shallow (Figure 11e), while Z26 is relatively deep around the tropical central Pacific ('E' in Figure 11e) and in the WNP ('F' in Figure 11e). In the EPA, Z26 is relatively shallow, and the ratio of interannual to seasonal Z26 variations is nearly 1.0 in the equatorial EPA, indicating that interannual Z26 variations are dominant around that area. The ratio of interannual to seasonal Z26 variations is locally low around 100°W to 120°W, 10°N to 20°N, almost coinciding with the area where Z26 is deep and TCHP is high. The footprint region seen in both the TCHP and Z26 distributions is located south of 'the preferred region of rapid intensification' (8ºN to 20ºN, 125ºE to 155ºE; square box in Figure 11f) [71]. Upon reaching category 4, super typhoons tend to congregate where the TCHP and Z26 gradients are steep in the WNP (Figures 11b and 11e). However, hurricanes reaching category 4 in the EPA are not always located where TCHP is high and Z26 is deep. This suggests that the relationship between TCHP and the frequencies of TC genesis and of intense TCs reaching category 4 differs between the WNP and the EPA.
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
6.3. TCHP Variations

To examine representative patterns of TCHP variations on a climate scale in the North Pacific, intraseasonal variations included in the raw monthly TCHP data are first excluded. After this procedure, an empirical orthogonal function (EOF) analysis of the 12-month running-mean monthly TCHP anomaly dataset is made. We obtain three representative modes of TCHP variations in the North Pacific from the EOF analysis. The three EOFs account for 38.5%, 23.0% and 11.8% of the total variance (Table 1) and are clearly distinct from one another based on North et al.'s test [88]. The spatial pattern of the first mode features an east-west pattern with opposite signs, which apparently represents the ENSO signal (Figure 12a). In fact, the normalized amplitude of this mode is well correlated with the ENSO index (Table 1), defined by the SST anomaly (SSTA) averaged over the eastern Pacific region (5˚N to 5˚S, 150˚W to 90˚W), with a correlation coefficient of 0.82. The correlation is significant at the 99% significance level based on the t-test.

Table 1. Correlation coefficients between the normalized loading amplitudes of the three representative EOF modes and indices associated with ENSO, ENSO-Modoki, and PDO. Bold with underline indicates the 99% significance level; underline alone indicates the 95% significance level
              EOF1 (38.5%)   EOF2 (23.0%)   EOF3 (11.8%)
ENSO              0.94          -0.14          -0.10
ENSO-Modoki       0.58          -0.44           0.55
PDO               0.58          -0.08          -0.38
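For readers wishing to reproduce this kind of decomposition, the following is a minimal sketch of one standard way to compute EOF modes via singular value decomposition; the array name and layout are assumptions for illustration, not the author's actual code.

```python
# Minimal EOF sketch (assumed input: `anom`, a 2-D array of 12-month
# running-mean TCHP anomalies with shape (n_months, n_gridpoints)).
import numpy as np

def eof_analysis(anom, n_modes=3):
    field = anom - anom.mean(axis=0)           # remove the temporal mean
    u, s, vt = np.linalg.svd(field, full_matrices=False)
    variance = s**2 / np.sum(s**2)             # explained-variance fractions
    pcs = u[:, :n_modes] * s[:n_modes]         # principal-component time series
    pcs = pcs / pcs.std(axis=0)                # normalized loading amplitudes
    patterns = vt[:n_modes]                    # EOF spatial patterns
    return patterns, pcs, variance[:n_modes]
```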
Figure 12. Upper panels present spatial patterns derived from the EOF analysis of TCHP, and lower panels present time series of the 12-month running mean of the normalized loading amplitude: (a) the first mode and the ENSO index, (b) the second mode and the EMI, and (c) the third mode and the PDO index. In the upper panels, solid lines indicate positive correlation and dashed lines indicate negative correlation. In the lower panels, solid lines represent the time series of the 12-month running mean of the normalized loading amplitude, and dashed lines indicate the time series of the ENSO index, EMI, and PDO index. [Reproduced from Wada and Chan 2008].
The second mode features a pattern flanked by a pattern of the opposite sign in the EPA (Figure 12b), which is similar to a unique tripolar pattern called "ENSO Modoki." "Modoki" is a classical Japanese word meaning "a similar but different thing" [89]. The ENSO-Modoki index (EMI) is defined as
EMI = [SSTA]a − 0.5[SSTA]b − 0.5[SSTA]c      (6)
where region a covers 10ºS to 10ºN, 165ºE to 140ºW, region b covers 15ºS to 5ºN, 110ºW to 70ºW, and region c covers 10ºS to 20ºN, 125ºE to 145ºW [89]. ENSO Modoki involves ocean-atmosphere coupled processes, including a unique tripolar sea-level pressure pattern during its evolution [89]. These first two EOF modes are often analyzed [90].

A downward trend is seen in the second EOF mode (Figure 12b), but both the slope and the intercept are insignificant based on the t-test. A downward trend is also seen in the third EOF mode (Figure 12c), and neither the slope nor the intercept is significant. However, unlike the downward trend in the second mode, the trend in the third mode is strongly affected by the normalized amplitudes in 1983 and 1998 (Figure 12c); in those years, TCNUM and TYNUM in the WNP are actually low. Even though the values of the slope and the intercept are not significant, the variations of the third mode in 1983 and 1998 differentiate this mode from the other EOF modes. The spatial pattern of this mode has the same sign in the WNP and the eastern equatorial Pacific and the opposite sign in the central Pacific region. The normalized amplitude of this mode is correlated with TCNUM and TYNUM in the WNP (Table 2). These correlations are significant at the 99% significance level based on the t-test, but the amplitudes are not significantly correlated with the SSTA in the NINO3 region. In contrast, TC activity in the EPA is well correlated with the SSTA in the NINO3 region (Table 3). This implies that the occurrence of central Pacific warming, WNP cooling, and equatorial eastern Pacific cooling is independent of ENSO and is accompanied by an increase in the total number of TCs, which differs from the relationship between ENSO and TCA in the EPA. This suggests a locally transient TC-ocean interaction caused by the passage of TCs; the WNP cooling pattern is probably caused by the superposition of the frequent passage of TCs.

It should be noted that the third EOF mode may be associated with the Pacific Decadal Oscillation (PDO). The PDO index is derived as the leading principal component of monthly SSTA in the North Pacific poleward of 20ºN. The association suggests that TCNUM and TYNUM in the WNP may vary with a decadal period. A lag-correlation analysis of the normalized amplitudes of the first and third modes demonstrates that the third mode precedes the first mode at a lag of 12 months. The correlation (0.42) is significant at the 99% significance level based on the t-test. This suggests that TC activity in the WNP may play an active role in the development of ENSO events [91]. Table 2 also indicates that STYNUM has a positive and significant correlation with the first EOF mode (the El Niño mode), but no relationship exists between the third EOF mode and STYNUM. This indicates that the relationship between TC activity and climatological TCHP differs among TC phases. The difference in correlations and their significance is also seen in duration, ACE, and PDI. The third mode is significantly correlated with TCDAY, TYDAY, ACE, ACE4, PDI, and PDI4 (Table 2). In addition, the first mode is correlated with STYDAY, ACES, and PDIS at the 95% significance level based on the t-test (Table 2).
The increase in TCDAY, ACE, and PDI during El Niño is consistent with previous studies [92-93].
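Before turning to the tabulated correlations, eq. (6) can be made concrete with a short sketch; `ssta_mean` is a hypothetical area-averaging helper (not from the original study), with longitudes in degrees east (e.g. 140°W = 220°E).

```python
# Hedged sketch of eq. (6): ENSO-Modoki index from area-averaged SSTA.
def enso_modoki_index(ssta_mean):
    a = ssta_mean(-10, 10, 165, 220)   # region a: 10°S-10°N, 165°E-140°W
    b = ssta_mean(-15, 5, 250, 290)    # region b: 15°S-5°N, 110°W-70°W
    c = ssta_mean(-10, 20, 125, 215)   # region c: 10°S-20°N, 125°E-145°W
    return a - 0.5 * b - 0.5 * c       # EMI
```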
Table 2. Correlation coefficients between the normalized loading amplitudes of the three representative EOF modes and indices associated with TCA in the WNP. Bold with underline indicates the 99% significance level; underline alone indicates the 95% significance level

              EOF1 (38.5%)   EOF2 (23.0%)   EOF3 (11.8%)
TCNUM             0.02           0.04           0.45
TYNUM             0.20           0.04           0.55
STYNUM            0.36          -0.09           0.29
TCDAY             0.47          -0.05           0.40
TYDAY             0.43          -0.07           0.35
STYDAY            0.40          -0.14           0.27
ACE               0.46          -0.09           0.41
ACE4              0.40          -0.07           0.45
ACES              0.36          -0.10           0.28
PDI               0.43          -0.09           0.39
PDI4              0.39          -0.09           0.47
PDIS              0.34          -0.07           0.29
Table 3. Same as Table 2 but for indices associated with TCA in the EPA

              EOF1 (38.5%)   EOF2 (23.0%)   EOF3 (11.8%)
TCNUM             0.18           0.10          -0.25
TYNUM             0.27           0.10          -0.32
STYNUM            0.46           0.06          -0.37
TCDAY             0.30           0.20          -0.27
TYDAY             0.33           0.18          -0.36
STYDAY            0.53          -0.03          -0.31
ACE               0.40           0.14          -0.35
ACE4              0.41           0.13          -0.38
ACES              0.53          -0.04          -0.29
PDI               0.44           0.11          -0.37
PDI4              0.45           0.10          -0.38
PDIS              0.52          -0.04          -0.28
The relationship between ENSO and super typhoons is explored from the perspective of TCHP variation. The years of El Niño (La Niña) events are determined using the criterion that the annual average of the NINO3 SSTA is higher than 0.5ºC (lower than −0.5ºC). Based on these criteria, the El Niño years are 1963, 1965, 1969, 1972, 1982, 1983, 1987, 1991, 1992, 1997, and 2002, while the La Niña years are 1964, 1971, 1975, 1985, 1988, 1989, 1999, and 2000. Climatologically, super typhoons are generated from May to December with a peak from August to October. The 44-year average of TCHP at TC positions along super-typhoon tracks is 77.9 kJ cm⁻², while the average TCHP during El Niño events is 74.4 kJ cm⁻² and that during La Niña events is 78.4 kJ cm⁻². During El Niño years, the averages of STYDAY [40.0 days], ACES [116.8·10⁴ kt²], and PDIS [112.1·10⁶ kt³] exceed the 44-year averages of STYDAY [29.4 days], ACES [108.5·10⁴ kt²], and PDIS [96.7·10⁶ kt³]. In contrast, during La Niña years the averages of STYDAY [19.0 days], ACES [94.7·10⁴ kt²], and PDIS [81.0·10⁶ kt³] fall below the 44-year averages. Therefore, super-typhoon formation is not directly related to TCHP at a TC position. ATCHP [22] is roughly defined as the product of TCHP and STYDAY. The 44-year average product is 2.3 MJ cm⁻²·day; the product during El Niño events is 3.0 MJ cm⁻²·day, while that during La Niña events is 1.5 MJ cm⁻²·day. Therefore, a long TC duration over the warm ocean is essential for super-typhoon formation due to an increase in ATCHP, consistent with a previous study [94]. In fact, specific track types related to ENSO [80] and the southeastward shift of TC genesis locations [47-48] cause long durations, and thus an increase in ATCHP.
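These ATCHP values follow directly from the quoted averages; as a quick arithmetic check:

```latex
\begin{aligned}
\text{44-year mean:}\quad & 77.9\ \mathrm{kJ\,cm^{-2}} \times 29.4\ \mathrm{day} \approx 2.3\ \mathrm{MJ\,cm^{-2}\,day},\\
\text{El Ni\~no:}\quad    & 74.4\ \mathrm{kJ\,cm^{-2}} \times 40.0\ \mathrm{day} \approx 3.0\ \mathrm{MJ\,cm^{-2}\,day},\\
\text{La Ni\~na:}\quad    & 78.4\ \mathrm{kJ\,cm^{-2}} \times 19.0\ \mathrm{day} \approx 1.5\ \mathrm{MJ\,cm^{-2}\,day}.
\end{aligned}
```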
WNP cooling or warming around 8ºN to 20ºN, 120ºE to 150ºE is possibly related to the frequent or deficient passage of TCs. In fact, TCHP anomalies are positive when TYNUM decreases, and vice versa. Around 8ºN to 20ºN, 120ºE to 150ºE, the upper ocean loses TCHP due to enhanced TCA (Figure 13). Assuming that the difference in the TCHP anomaly between large and small TCNUM is nearly 4 kJ cm⁻², that the difference in Z26 is nearly 0.32, and that the density is 1.023 g cm⁻³, the difference in temperature derived from equation (1) is nearly 2.9ºC. Indeed, the amplitude of the Z26 variation is small compared with that of TCHP. Because of the small percentage of variance in the third mode (11.8%) compared with the first and second modes, the decrease in SST due to TC activity is estimated to be 0.34ºC at most. It should be noted that both the horizontal resolution of the North-Pacific version of the MOVE system and that of the atmospheric forcing are relatively coarse. More realistic atmospheric forcing and finer horizontal resolution would enable the SSC induced by TCs in the WNP to be reproduced [95-96].
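The temperature estimate can be checked with a back-of-the-envelope calculation. Assuming a seawater specific heat of cp ≈ 4.2 J g⁻¹ K⁻¹ (an assumption here, since equation (1) appears earlier in the chapter) and reading the quoted Z26 difference as a depth scale of about 3.2 m (0.32·10³ cm), the arithmetic is self-consistent:

```latex
\Delta T \approx \frac{\Delta \mathrm{TCHP}}{\rho\, c_p\, \Delta Z}
= \frac{4000\ \mathrm{J\,cm^{-2}}}{1.023\ \mathrm{g\,cm^{-3}} \times 4.2\ \mathrm{J\,g^{-1}\,K^{-1}} \times 320\ \mathrm{cm}}
\approx 2.9\ \mathrm{^{\circ}C}.
```

Scaling by the third mode's variance fraction, 2.9 ºC × 0.118 ≈ 0.34 ºC, which matches the maximum SST decrease quoted above.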
Figure 13. Example of a decrease in TCHP due to enhanced TC activity. (a) Horizontal distribution of the difference in TCHP between February and March 2004, when no TCs existed in the WNP. (b) Same as (a) except for the difference between August and September 2004. Green circles indicate TC positions in August; red circles indicate those in September. [From Wada and Chan 2008].
6.4. TCHP and Atmospheric Response on a Seasonal Scale

Tropical cyclone (TC) tracks are usually influenced by large-scale atmospheric environmental conditions such as the monsoonal trough or subtropical ridge [97], the Madden-Julian Oscillation (MJO) [98], the stratospheric quasi-biennial oscillation [99], tropical upper-tropospheric troughs [49], and ENSO [48, 80]. These atmospheric environmental conditions are understood as part of an air-sea coupled system. In particular, ENSO dynamics include upper-ocean dynamics and thermodynamics. In that sense, TCHP can be regarded as a metric of the oceanic thermal forcing that affects large-scale atmospheric environmental conditions and TC tracks. Previous studies have reported that TC formation tends to be enhanced in the southeastern part of the WNP [48, 80] and in the western part of the EPA [100] during El Niño years. However, it is the principal component of EOF3 derived from the EOF analysis of TCHP in the North Pacific for 44 years, not that of EOF1, that is significantly correlated with TC numbers, TC duration, ACE, and PDI in the WNP [23]. Here we define unusual TCHP-anomaly years from the EOF3 normalized amplitudes [19]; these unusual TCHP-anomaly years are considered to be closely related to TCA. After eliminating a negative trend from 1961 to 2004, the standard deviation σ is found to be 0.97. There are 12 positive years (hereafter PTCA: 1961, 1967, 1976, 1986, 1990, 1991, 1994, 1995, 1996, 2002, 2003, and 2004) in which the normalized amplitude of the third EOF mode is > 0.5σ (nearly 0.48) and eight negative years (hereafter NTCA: 1970, 1973, 1983, 1984, 1987, 1988, 1998, and 1999) in which this amplitude is < −0.5σ. Figure 14 depicts a schematic diagram of the time series of these events. A cycle regularly appears among PTCA, El Niño, NTCA, and La Niña events at least until 2000, when the cycle seems to reverse due to the La Niña events subsequently occurring in 2005/2006 and 2007/2008 after the TCA event in 2004.
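The year selection just described amounts to detrending the annual EOF3 amplitude and thresholding at ±0.5σ; a minimal sketch follows (variable names are assumptions for illustration).

```python
# Sketch of the PTCA/NTCA selection: detrend the annual EOF3 normalized
# amplitude over 1961-2004, then flag years beyond +/-0.5 sigma.
import numpy as np

def classify_tca_years(years, amplitude):
    years = np.asarray(years, dtype=float)
    amplitude = np.asarray(amplitude, dtype=float)
    coeffs = np.polyfit(years, amplitude, 1)          # linear (here negative) trend
    residual = amplitude - np.polyval(coeffs, years)  # detrended amplitude
    sigma = residual.std()                            # ~0.97 in the text
    ptca = years[residual > 0.5 * sigma]              # positive TCA years
    ntca = years[residual < -0.5 * sigma]             # negative TCA years
    return ptca, ntca
```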
Figure 14. Schematic diagram of the years of positive and negative TCA and El Niño and La Niña event occurrence.
The previous section indicated that the 44-year mean TCHP map over the Pacific from 1961 to 2004 exhibits two high-TCHP areas: the equatorial CP and the WNP. Figure 15 presents composite maps of TCHP, Z26 and TC positions in the WNP in the summer season (July, August and September; JAS) in the PTCA (Figure 15a) and NTCA (Figure 15b) years. The composite maps of the number of TCs and of intense-TC positions are made by counting these numbers every six hours for each 5°x5° grid box [73]; a sketch of this counting is given below. Intense TCs are defined as TCs reaching category 4 on the Saffir-Simpson scale. In the PTCA years, TCHP is highest and Z26 is deepest in the tropical CP (Figure 15a). In the NTCA years, TCHP and Z26 become highest and deepest in the WNP (Figure 15b). A difference in the composite maps of TC and intense-typhoon positions is seen between the PTCA and NTCA years. The mean position of TCs is 18.6°N, 137.2°E in the PTCA years, but 19.2°N, 132.2°E in the NTCA years. Since the mean position is 18.3°N, 139.5°E in the El Niño years and 20.0°N, 132.8°E in the La Niña years, the northwest-southeast shift of TC tracks during ENSO years [47-48, 80] occurs even in the PTCA/NTCA years.
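A minimal sketch of such a composite count, assuming `lons` and `lats` hold the 6-hourly TC positions pooled over the composite years (names and domain are illustrative):

```python
# Count 6-hourly TC positions per 5°x5° box over an assumed WNP domain.
import numpy as np

def count_tc_positions(lons, lats, lon_range=(100, 180), lat_range=(0, 50)):
    lon_edges = np.arange(lon_range[0], lon_range[1] + 5, 5)
    lat_edges = np.arange(lat_range[0], lat_range[1] + 5, 5)
    counts, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    return counts          # counts[i, j]: positions in the (i, j) 5°x5° box
```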
Figure 15. Composite maps of the horizontal distribution of mean TCHP (top), Z26 (middle), and TC positions (bottom) in (a) positive TCA years and (b) negative TCA years. Contours represent the number of TC positions counted every six hours for each 5°x5° box, and shading indicates the number of intense TC positions. Intense TCs are defined as those rated 4 or 5 on the Saffir-Simpson hurricane scale.
In fact, the PTCA years lead to more TC formations, longer durations, and higher ACE and PDI, while the NTCA years lead to fewer TC formations, shorter durations, and lower ACE and PDI. These features are consistent with those of El Niño and La Niña years. During the PTCA years, TCs are distributed widely in the WNP, extending southeastward, and their frequency and the frequency of intense TCs are high around 15°N, 130°E (Figure 15a).
Here we address the corresponding atmospheric patterns for the PTCA and NTCA years. Figure 16 presents composite maps of 500-hPa height, 850-hPa specific humidity, and 850-hPa zonal wind in JAS for the PTCA (Figure 16a) and NTCA (Figure 16b) years. The National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP/NCAR) monthly atmospheric reanalysis dataset [101] is used to make each composite map. Because the variations of large-scale atmospheric environments are more transient than those of oceanic environments, the occurrence and sustenance of active convection, such as an enhanced MJO phase and amplitude in the tropics [102], are considered to be easily influenced by the location of high-TCHP areas. How is the occasional atmospheric onset of a stationary convective phase associated with high TCHP, and how does the onset affect TC trajectories through the variations of large-scale atmospheric and oceanic environments? Here we focus on differences in the mean atmospheric environments between the PTCA and NTCA years.
Figure 16. Composite maps of the horizontal distribution of 500-hPa height (top, unit: gpm), 850-hPa specific humidity (middle, unit: g kg-1), and 850-hPa zonal wind speed (bottom, unit: m s-1) in (a) positive TCA years and (b) negative TCA years.
In the PTCA years, high TCHP and deep Z26 in the CP (Figure 15a) invoke strong convection. The strong convection forces lower-tropospheric winds to be westerly, so that westerly winds accompanied by cyclonic anomalies extend to the equatorial CP around 160°E (Figure 16a). Strong convection also gives rise to Rossby-wave-type dispersion. The eastward extension of westerly winds and the Rossby-wave-type dispersion lead to tropical cyclogenesis northwest of the strongest convection, resulting in the formation of TCs in the southeastern part of the WNP [103] (Figure 15a). The composite map of 500-hPa height indicates that the western edge of the 5870-gpm contour is over the East China Sea (Figure 16a). Both the location of TC formation and the pattern of 500-hPa height are responsible for the features of TC trajectories, particularly the recurvature of TCs (Figure 15a).
In the NTCA years, TCHP is highest and Z26 is deepest around 10ºN to 20ºN, 120ºE to 150ºE compared with the PTCA years. Figure 16b shows that westerly winds hardly extend eastward. Because strong convection forms much closer to the Philippines, Rossby-wave-type dispersion hardly occurs in the WNP. The westward shift of strong convection is responsible for low specific humidity around 10°N, 140°E to 160°E, which is not favorable for TC formation even though TCHP is relatively high in the WNP. Therefore, TCs form further northwest. The composite map of 500-hPa height indicates that the western edge of the 5870-gpm contour shifts further westward than in the PTCA years. This reveals that TCs tend to move more westward, resulting in more landfalls in South China and the Philippines and fewer in Japan and Korea. The large-scale atmospheric environmental pattern in the NTCA years is analogous to the La Niña-type Cluster D presented in a previous study [80].
7. Conclusion

This chapter describes the impact of ocean energy on tropical cyclone activity in the western North Pacific (WNP) on seasonal to climate time scales. Ocean thermal forcing is necessary for the formation and development of tropical cyclones (TCs). In addition, ocean thermal forcing can control TC intensity during the decay phase. TC intensity upon reaching minimum central pressure is related not to the underlying sea-surface temperature (SST) but to the SST accumulated from formation until first reaching the mature phase. Tropical cyclone heat potential (TCHP), a measure of the oceanic heat content from the surface to the depth of the 26°C isotherm, is introduced for investigating the relationship between upper-ocean thermal forcing and TC intensity. Accumulated TCHP estimated from 10-day-mean oceanic reanalysis data is correlated with TC intensity, as are the accumulated 26ºC-isotherm depth (Z26) estimated from the same reanalysis, TC duration, and accumulated SST estimated from three-day-mean satellite observation data. Even though the temporal resolutions of the TCHP and Z26 datasets are coarser than that of satellite SST, the result suggests that upper-ocean temperature and salinity profiles are important as ocean thermal forcing for TC intensification.

From a climate point of view, TCHP is highest in the Southern Hemisphere central Pacific. TCHP is locally high in the WNP, corresponding to the area where a number of intense TCs reach category 4 on the Saffir-Simpson scale. The EOF analysis of TCHP for 44 years indicates three significant modes: the El Niño-Southern Oscillation (ENSO), ENSO-Modoki, and central Pacific warming or cooling events associated with the Pacific Decadal Oscillation. The central Pacific warming or cooling event is closely related to TC activity in the WNP and exhibits features similar to the ENSO-TC activity relationship, such as a northwest-southeast shift of the location of TC formation and a difference in TC intensity and duration between El Niño and La Niña years. The difference in TC activity between warm and cool events in the central Pacific is consistent with the difference in the atmospheric environments between them. However, high in situ TCHP is not always related to enhanced TC activity (TCA).

This chapter reaches the conclusion that environmental dynamics, not environmental thermodynamics, plays a crucial role in rapid intensification in the WNP. This implies that artificial and arbitrary changes in local oceanic conditions do not directly impact TCA, even
though each TC is influenced by the changes. Rather, a serious problem may arise for TCA if atmospheric dynamic conditions do change as a result of such artificial and arbitrary changes in local oceanic conditions.
References

[1] Palmén, EH. Geophysica, 1948, 3, 26-38.
[2] Emanuel, KA. J. Atmos. Sci., 1995, 52, 3969-3976.
[3] DeMaria, M; Kaplan, J. Wea. Forecasting, 1994, 9, 209-220.
[4] DeMaria, M; Kaplan, J. Wea. Forecasting, 1999, 14, 326-337.
[5] DeMaria, M; Mainelli, M; Shay, LK; Knaff, JA; Kaplan, J. Wea. Forecasting, 2005, 20, 531-543.
[6] Kantha, LH; Clayson, CA. Small Scale Processes in Geophysical Fluid Dynamics; International Geophysics Series, Vol. 67; Academic Press: San Diego, CA, 2000, 888 pp.
[7] Scharroo, R; Smith, WHF; Lillibridge, JL. EOS Trans. AGU, 2005, 86, 366.
[8] Sun, D; Gautam, R; Cervone, G; Kafatos, M. EOS Trans. AGU, 2006, 87, 89-90.
[9] Scharroo, R. EOS Trans. AGU, 2006, 87, 90.
[10] Cione, JJ; Uhlhorn, EW. Mon. Wea. Rev., 2003, 131, 1783-1796.
[11] Sriver, RL; Huber, M. Nature, 2007, 447, 577-580.
[12] Kawai, Y; Wada, A. J. Oceanogr., 2007, 63, 721-744.
[13] Donlon, C; Coauthors. Bull. Am. Meteorol. Soc., 2007, 88, 1197-1213.
[14] Ginis, I. In Global Perspectives on Tropical Cyclones; Elsberry, RL; Ed; 1995, 693, 198-260.
[15] Leipper, DF; Volgenau, D. J. Phys. Oceanogr., 1972, 2, 218-224.
[16] Gray, WM. In Meteorology over the Tropical Oceans; Shaw, DB; Ed; Roy. Meteor. Soc.: Bracknell, Berkshire, 1979, 155-288.
[17] Shay, LK; Goni, GJ; Black, PG. Mon. Wea. Rev., 2000, 128, 1366-1383.
[18] Goni, GJ; Trinanes, JA. EOS Trans. AGU, 2003, 84, 573, 577-578.
[19] Bao, JW; Wilczak, JM; Choi, JK; Kantha, LH. Mon. Wea. Rev., 2000, 128, 2190-2210.
[20] Hong, W; Chang, SW; Raman, S; Shay, LK; Hodur, R. Mon. Wea. Rev., 2000, 128, 1347-1365.
[21] Lin, II; Wu, CC; Emanuel, KA; Lee, IH; Wu, CR; Pun, IF. Mon. Wea. Rev., 2005, 133, 2635-2649.
[22] Wada, A; Usui, N. J. Oceanogr., 2007, 63, 427-447.
[23] Wada, A; Chan, JCL. Geophys. Res. Lett., 2008, 35, L17603.
[24] Price, JF. Ocean Sci. Discuss., 2009, 6, 909-951.
[25] Usui, N; Ishizaki, S; Fujii, Y; Tsujino, H; Yasuda, T; Kamachi, M. Adv. Space Res., 2006, 37, 806-822.
[26] Ishikawa, I; Tsujino, H; Hirabara, M; Nakano, H; Yasuda, T; Ishizaki, H. Meteorological Research Institute Community Ocean Model (MRI.COM) Manual; Technical Reports of the Meteorological Research Institute, 2005, 47, 189 pp. (in Japanese)
[27] Arakawa, A. J. Comput. Phys., 1966, 32, 2299-2311.
[28] Takano, K. In Oceanography as Environmental Science; Horibe, S; Ed; Tokyo University Press: Tokyo, 1978, 27-44. (in Japanese)
[29] Oonishi, Y. In Oceanography as Environmental Science; Horibe, S; Ed; Tokyo University Press: Tokyo, 1978, 246-271. (in Japanese)
[30] Ishizaki, H; Motoi, T. J. Atmos. Oceanic Technol., 1999, 16, 1994-2010.
[31] Weatherly, GL. J. Phys. Oceanogr., 1972, 2, 54-72.
[32] Smith, WHF; Sandwell, DT. Science, 1997, 277, 1956-1962.
[33] Redi, MH. J. Phys. Oceanogr., 1982, 12, 1154-1158.
[34] Gent, PR; McWilliams, JC. J. Phys. Oceanogr., 1990, 20, 150-155.
[35] Tsujino, H; Hasumi, H; Suginohara, N. J. Phys. Oceanogr., 2000, 30, 2853-2865.
[36] Smagorinsky, J. Mon. Wea. Rev., 1963, 91, 99-164.
[37] Noh, Y; Kim, HJ. J. Geophys. Res., 1999, 104, 15621-15634.
[38] Fujii, Y; Kamachi, M. J. Geophys. Res., 2003, 108, 3297.
[39] Fujii, Y. J. Oceanogr., 2005, 61, 655-662.
[40] Fujii, Y; Kamachi, M. Tellus A, 2003, 55, 450-454.
[41] Conkright, ME; Coauthors. In World Ocean Database 2001; Levitus, S; Ed; NOAA Atlas NESDIS 42; U.S. Government Printing Office: Washington, DC, 2002, 1, 1-167.
[42] Kuragano, T; Kamachi, M. J. Geophys. Res., 2000, 105, 955-974.
[43] Bloom, SC; Takacs, LL; Da Silva, AM; Ledvina, D. Mon. Wea. Rev., 1996, 124, 1256-1271.
[44] Anthes, RA. Tropical Cyclones: Their Evolution, Structure and Effects; Meteorological Monographs Vol. 19, No. 41; American Meteorological Society: Boston, MA, 1982, 208 pp.
[45] Zehr, RM. Tropical Cyclogenesis in the Western North Pacific; NOAA Tech. Report NESDIS-61; US Department of Commerce: Washington, DC, 181 pp.
[46] Wada, A. J. Geophys. Res., 2009, 114, D18111.
[47] Chia, HH; Ropelewski, CF. J. Climate, 2002, 15, 2934-2944.
[48] Wang, B; Chan, JCL. J. Climate, 2002, 15, 1643-1658.
[49] Sadler, JC. Mon. Wea. Rev., 1978, 115, 1606-1626.
[50] Wang, Y; Wu, CC. Meteorol. Atmos. Phys., 2004, 87, 257-278.
[51] Kleinschmidt, E. Arch. Meteorol. Geophys. Bioklimatol., 1951, A4, 53-72.
[52] Miller, BI. J. Meteorol., 1958, 15, 184-195.
[53] Malkus, JS; Riehl, H. Tellus, 1960, 12, 1-20.
[54] Emanuel, KA. J. Atmos. Sci., 1986, 43, 585-604.
[55] Emanuel, KA. J. Atmos. Sci., 1988, 45, 1143-1155.
[56] Holland, GJ. J. Atmos. Sci., 1997, 54, 2519-2541.
[57] Camp, JP; Montgomery, MT. Mon. Wea. Rev., 2001, 129, 1704-1717.
[58] Shen, W. Q. J. R. Meteorol. Soc., 2004, 130, 2629-2648.
[59] Shen, W. Q. J. R. Meteorol. Soc., 2004, 131, 2629-2648.
[60] Baik, JJ; Paek, JS. J. Meteor. Soc. Japan, 1998, 76, 129-137.
[61] DeMaria, M; Kaplan, J. J. Climate, 1994, 7, 1324-1334.
[62] Whitney, LD; Hobgood, JS. J. Climate, 1997, 10, 2921-2930.
[63] Atkinson, GD; Holliday, CR. Mon. Wea. Rev., 1977, 105, 421-427.
[64] Knaff, JA; Zehr, RM. Wea. Forecasting, 2006, 18, 1093-1108.
[65] Evans, JL. J. Climate, 1993, 6, 1133-1140.
[66] Kamahori, H; Yamazaki, N; Mannoji, N; Takahashi, K. SOLA, 2006, 2, 104-107.
[67] Wu, MC; Yeung, KH; Chang, WL. EOS Trans. AGU, 2006, 87, 537.
[68] Nakazawa, T; Hoshino, S. SOLA, 2009, 5, 33-36.
[69] Price, JF. J. Phys. Oceanogr., 1981, 11, 153-175.
[70] Wada, A. Pap. Meteorol. Geophys., 2002, 52, 31-66.
[71] Wang, B; Zhou, X. Meteorol. Atmos. Phys., 2008, 99, 1-16.
[72] Kaplan, J; DeMaria, M. Wea. Forecasting, 2003, 18, 1093-1108.
[73] Kanamitsu, M; Ebisuzaki, W; Woolen, J; Yang, SK; Hnilo, JJ; Fiorino, M; Potter, GL. Bull. Am. Meteorol. Soc., 2002, 83, 1631-1643.
[74] Reed, RJ; Recker, EE. J. Atmos. Sci., 1971, 28, 1117-1133.
[75] Briegel, LM; Frank, WM. Mon. Wea. Rev., 2000, 128, 917-945.
[76] Webster, PJ; Holland, GJ; Curry, JA; Chang, HR. Science, 2005, 309, 1844-1846.
[77] Emanuel, KA. Nature, 2005, 436, 686-688.
[78] Camargo, SJ; Sobel, AH. J. Climate, 2005, 18, 2996-3006.
[79] Wu, MC; Chang, WL; Leung, WM. J. Climate, 2004, 17, 1419-1428.
[80] Camargo, SJ; Robertson, AW; Gaffney, SJ; Smyth, P; Ghil, M. J. Climate, 2007, 20, 3654-3676.
[81] Chan, JCL; Shi, JE; Lam, CM. Wea. Forecasting, 1998, 13, 997-1004.
[82] Chan, JCL; Shi, JE; Liu, KS. Wea. Forecasting, 2001, 16, 491-498.
[83] Camargo, SJ; Barnston, AG; Klotzbach, PJ; Landsea, CW. WMO Bull., 2007, 56, 297-309.
[84] Chan, JCL. Science, 2006, 311, 1713.
[85] Webster, PJ; Curry, JA; Liu, J; Holland, GJ. Science, 2006, 311, 1713.
[86] Levitus, S; Antonov, J; Boyer, T. Geophys. Res. Lett., 2005, 32, L02604.
[87] Bell, GD; Coauthors. Bull. Amer. Meteor. Soc., 2000, 81, S1-S50.
[88] North, GH; Bell, TL; Cahalan, RF; Moeng, FJ. Mon. Wea. Rev., 1982, 110, 699-706.
[89] Ashok, K; Behara, SK; Rao, SA; Weng, H; Yamagata, T. J. Geophys. Res., 2007, 112, C11007.
[90] Levitus, S; Antonov, JI; Boyer, TP; Garcia, HE; Locarnini, RA. Geophys. Res. Lett., 2005, 32, L18607.
[91] Sobel, AH; Camargo, SJ. J. Atmos. Sci., 2005, 62, 3396-3407.
[92] Camargo, SJ; Sobel, AH. J. Climate, 2005, 18, 2996-3006.
[93] Chan, JCL. Tellus A, 2007, 59, 455-460.
[94] Chan, JCL. Proc. R. Soc. A, 2008, 464, 249-272.
[95] Wada, A. J. Oceanogr., 2005, 61, 41-57.
[96] Wada, A; Niino, H; Nakano, H. J. Oceanogr., 2009, 65, 373-396.
[97] Harr, PA; Elsberry, RL. Mon. Wea. Rev., 1995, 123, 1225-1246.
[98] Liebmann, B; Hendon, HH; Glick, JD. J. Meteor. Soc. Japan, 1994, 72, 401-412.
[99] Chan, JCL. Mon. Wea. Rev., 1995, 123, 2567-2571.
[100] Irwin, RP; Davis, R. Geophys. Res. Lett., 1999, 26, 2251-2254.
[101] Kalnay, E; Coauthors. Bull. Am. Meteorol. Soc., 1996, 77, 431-471.
[102] Wheeler, MC; Hendon, HH. Mon. Wea. Rev., 2004, 132, 1917-1932.
[103] Ritchie, EA; Holland, GJ. Mon. Wea. Rev., 1999, 127, 2027-2043.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 133-182
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 4
SUNLIGHT AND SKYLIGHT AVAILABILITY

Stanislav Darula and Richard Kittler

Slovak Academy of Sciences, Bratislava, Slovakia
Abstract

Solar radiation, as the primary daytime source of energy and daylighting, needs to be specified for practical purposes. Extraterrestrial parallel-beam irradiance and illuminance, defined by the Solar Constant and the Luminous Solar Constant, serve as a world-wide representation of the maximal availability reaching the Earth. Their time-corrected horizontal values can serve as momentary normalizing levels for sunlight and skylight availability in any location, specifying daily or yearly changes. Direct, parallel sun-beam illuminance at ground level is reduced by transmittance losses, scattering and reflection caused by atmospheric content, e.g. turbidity/aerosols, pollution and cloudiness. These effects are defined here by influential broadband parameters and illuminance measurements. Atmospheric scattering approximated by indicatrix and gradation functions based on measurements by sky luminance scanners was analyzed for application to quasi-homogeneous sky models with typical luminance patterns occurring world-wide, which have already been adopted in the ISO/CIE standard. Diffuse and global illuminance levels were studied under different daily, seasonal and yearly courses with examples of regular local measurements gathered at the Bratislava CIE IDMP station. Measured data were analyzed using parameters and methods suitable for availability evaluations. The intention is to stimulate simple regular illuminance/luminance recording at meteorological stations to document local daylight availability. Examples analyzing daylight climate in different regions are documented with the aim of defining local sunlight and skylight availability changes or territorial distribution applying sunshine duration data. Recommended descriptors and determining influences on daylight climate, their interrelations and approximation formulae for computer studies are presented. Possible graphical representations of real situations and their actual changes are shown in this chapter.
4.1. Extraterrestrial Illuminance as the Availability Criterion

In spite of frequent eruptions and protuberances, the spherical sun surface radiates into the vast universe a relatively steady flow of electromagnetic radiation. As a huge and
permanent atomic furnace with a high surface temperature around 6000 K, the sun is an immense primary source of energy. The Earth, on its almost circular orbit, is at its mean distance from the sun, 149.5 million km, on 3rd April and 5th October, while reaching the minimal distance of 147 Mkm on 3rd January and the maximum of 152 Mkm on 4th July. The extraterrestrial solar radiation spectrum was recently defined by Gueymard, 2004 with the so-called Solar Constant SC = 1366.1 W/m2, which is the solar irradiance on a fictitious plane normal to the sun beams placed at the outer border of the atmosphere when the Earth is at its mean distance from the sun. Under the same assumptions, the Luminous Solar Constant (LSC) was calculated by integrating the solar visible spectrum (380 to 780 nm) with the standard monochromatic sensitivity of the human eye V(λ), and the resulting LSC = 133 334 lx was recommended for practice (Darula et al., 2005). However, because a dual definition of the human eye sensitivity to monochromatic radiation exists in photometry and colorimetry (CIE, 1990), besides V(λ)
a modified VM(λ) can also be taken into account, and thus LSCM = 134 108 lx is sometimes used. Therefore a reasonably accurate average of 133.8 klx was also recommended and adopted in a few CIE documents (e.g., CIE, 1994). Due to orbit changes, both these constants have to be corrected for any particular day with day number J, starting from 1st January (J = 1) to 31st December (either J = 365 or 366 in a leap year). Then a simple approximation formula for the so-called eccentricity factor ∋ can be applied (after IES, 1984):
∋ = 1 + 0.034 cos[360°(J − 2)/365]      (4.1)
Both the SC and LSC constants are valid extraterrestrially world-wide and represent the maximum irradiance and illuminance quantities, which often serve as the upper border of availability. Thanks to their stable values, they are also applied for the quality control of regular measurements at ground level. More important locally are the daily variations of variously sloped or differently oriented parallel sun beams and their corresponding extraterrestrial horizontal levels, either in the total solar spectrum, Ee, or in the visible spectrum, EV:

Ee = SC ∋ sin γS   [W/m2]      (4.2)

EV = LSC ∋ sin γS   [lx]      (4.3)

where γS is the solar altitude, sometimes also called sun height, which is time and place dependent:

sin γS = sin ϕ sin δ − cos ϕ cos δ cos(15°H)      (4.4)

where ϕ is the local geographical latitude,
H is the number of the day hour in true solar time TST from midnight (H = 0), and

δ is the solar declination, which can be approximated after Smith and Wilson, 1976 as

δ = 23.45° sin[360°(J − 81)/365]   [°]      (4.5)
The extraterrestrial horizontal illuminance EV in lx, in true solar time at a particular locale, in fact also expresses the luminous flux in lm/m2 reaching the outer border of the globe's atmosphere exclusively in the form of parallel sun beams. The atmosphere, like a huge spherical layer, covers and protects the globe and, depending on the sun-beam direction determined by the angular solar altitude and azimuth, filters and scatters these beams.
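The chain of eqs. (4.1) to (4.5) is easily evaluated; the following minimal sketch in Python (with the hour H given in true solar time) is an illustration, not part of the original text.

```python
# Extraterrestrial horizontal illuminance EV, eqs. (4.1)-(4.5).
import math

LSC = 133_334.0   # Luminous Solar Constant [lx], as recommended above

def extraterrestrial_illuminance(J, H, phi_deg):
    ecc = 1.0 + 0.034 * math.cos(math.radians(360.0 * (J - 2) / 365.0))             # eq. (4.1)
    delta = math.radians(23.45 * math.sin(math.radians(360.0 * (J - 81) / 365.0)))  # eq. (4.5)
    phi = math.radians(phi_deg)
    sin_gamma = (math.sin(phi) * math.sin(delta)
                 - math.cos(phi) * math.cos(delta) * math.cos(math.radians(15.0 * H)))  # eq. (4.4)
    return max(0.0, LSC * ecc * sin_gamma)   # eq. (4.3); zero when the sun is below the horizon

# e.g. Bratislava (phi = 48.28° N), 10th June (J = 162) at solar noon (H = 12)
# gives EV of roughly 117 klx.
```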
4.2. Parallel Sun-Beam Illuminance and the Ground-Level Availability

In equatorial regions at noon, sunlight penetrates the thinnest optical air mass (m), which in relative terms has unity value, i.e., m = 1, and under the assumption of an absolutely clear and clean, now so-called Rayleigh, 1899, atmosphere, its overall transmittance was expressed by Bouguer's, 1760 exponential law. Thus for the sunlight at ground level on the equator, when the sun is at the zenith:

• on the plane normal to the sun beams, the primary parallel-beam illuminance Pvn is

Pvn = LSC exp(−aV)   [lx]      (4.6)

• on the horizontal plane, PV is

PV = EV exp(−aV)   [lx]      (4.7)
where aV is the luminous extinction coefficient, which for the zenith penetration of sunlight has a value close to 0.1, corresponding to the best atmospheric transmittance τa = 0.9048. However, if the sun is not exactly at the zenith, even at the equator solar parallel beams have to penetrate a thicker atmospheric layer, and it was found that the air mass has a small influence also on the extinction, for which Clear, 1982 introduced the relation

aV = 1 / (9.9 + 0.043 m)      (4.8)

This was later published by his LBL colleagues Navvab et al., 1984 in the form

aV = 0.1 / (1 + 0.0045 m)      (4.9)
For arbitrarily sloped sun beams under solar altitude γS, the first studies by Bouguer, 1729 showed a tendency depending closely on m = 1/sin γS, except at very low elevations toward the horizon, where m cannot reach infinity due to the rounded atmospheric cover of the globe. From his luminance measurements of the moon taken during the night of 23rd to 24th November 1725, Bouguer summarized m in a table for altitude angles, shown in graphical form in Figure 1. Almost identical tables were published roughly 150 years later by Bemporad, 1904, and these were expressed in mathematical form as a function of solar altitude by Makhotkin, 1960 in the formula

m = [(sin²γS + 0.0031465)^0.5 − sin γS] / 0.001572      (4.10)
It is interesting to note that while Bouguer, 200 years earlier, assumed a homogeneous zenith thickness of the atmosphere of 7834 m, Makhotkin's assumption was 10200 m. Nowadays several approximations can be found in the literature to determine the relative air mass as a function of solar altitude. For practical tasks the simplest seems to be Powell's, 1982 formula

m = 35 (1224 sin²γS + 1)^−0.5      (4.11)
For computer calculations, probably the most precise and yet quite simple is the relation by Kasten and Young, 1989 for lowland regions:

m = 1 / [sin γS + 0.50572 (γS + 6.07995°)^−1.6364]      (4.12)
Figure 1. Comparison of various definitions of air-mass dependence on the elevation angle.
More sophisticated is the general expression for various air masses (e.g. Rayleigh, ozone, mixed gases or water vapor and aerosol) defined by Gueymard in Gueymard and Kambezidis, 1997; this expression is compared with the others in Figure 1. For mountainous locales, m can be corrected as mm due to the site height above sea level v in km:

mm = m (1 − 0.1v)      (4.13)
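For orientation, the two simple approximations and the altitude correction can be coded directly; a sketch with gamma_s in degrees:

```python
# Relative optical air mass, eqs. (4.11)-(4.13).
import math

def air_mass_powell(gamma_s):
    return 35.0 / math.sqrt(1224.0 * math.sin(math.radians(gamma_s)) ** 2 + 1.0)  # eq. (4.11)

def air_mass_kasten_young(gamma_s):
    return 1.0 / (math.sin(math.radians(gamma_s))
                  + 0.50572 * (gamma_s + 6.07995) ** -1.6364)                     # eq. (4.12)

def air_mass_mountain(m, v_km):
    return m * (1.0 - 0.1 * v_km)   # eq. (4.13), site height v in km

# Both approximations give m ~ 1 at the zenith and m ~ 36-38 at the horizon.
```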
So, finally, for any solar altitude the parallel sun-beam illuminance is:

• on the normal plane to the sun beams,

Pvn = ∋ LSC exp(−aV m TV)   [lx]      (4.14)

This relation is graphically represented in Figure 2, where the influence of the luminous turbidity factor in dependence on the solar altitude is documented.

• on a horizontal plane at ground level,

PV = EV exp(−aV m TV)   [lx]      (4.15)
where TV is the luminous turbidity factor expressing roughly the overall broad-band influences of aerosols, water vapor droplets, gas and dust pollution in different locales, when the real atmosphere's properties differ from those assumed for the ideal Rayleigh atmosphere. In other words, TV indicates how many ideal atmospheres would have to be overlaid on each other to simulate the momentary real situation in the direction of the sun beams.
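Combining eqs. (4.8), (4.14) and (4.15), a short self-contained sketch follows (gamma_s in degrees, J the day number):

```python
# Parallel sun-beam illuminance under turbidity TV, eqs. (4.14)-(4.15).
import math

LSC = 133_334.0  # Luminous Solar Constant [lx]

def beam_illuminance(gamma_s, TV, J):
    m = 1.0 / (math.sin(math.radians(gamma_s))
               + 0.50572 * (gamma_s + 6.07995) ** -1.6364)               # eq. (4.12)
    aV = 1.0 / (9.9 + 0.043 * m)                                         # eq. (4.8)
    ecc = 1.0 + 0.034 * math.cos(math.radians(360.0 * (J - 2) / 365.0))  # eq. (4.1)
    Pvn = ecc * LSC * math.exp(-aV * m * TV)                             # eq. (4.14), normal plane
    PV = Pvn * math.sin(math.radians(gamma_s))                           # eq. (4.15), horizontal
    return Pvn, PV
```

For example, gamma_s = 60°, TV = 3 and J = 162 give Pvn of roughly 94 klx, in line with the clear-sky range of Figure 2.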
Figure 2. Parallel sun beam illuminance on a normal plane under different turbidities.
Copying the concept of the turbidity factor TL proposed by Linke, 1922 (which is valid for the total solar spectrum and is based on irradiance levels), the luminous or illuminance turbidity factor TV was suggested for sunlight (Navvab et al., 1984, Ruck and Kittler, 1987). The true value of TV can be determined only when PV levels are measured; then

TV = −ln(PV / EV) / (aV m) = (ln LSC − ln Pvn) / (aV m)      (4.16)
where Pvn is the illuminance normal to the sun beams, which is measured directly by a tracking illuminance meter. It should be noted that measuring only the influence of parallel sun beams requires shielding off all skylight by a narrow cylinder with a small acceptance solid angle, which has to follow the daily sun path. This tracker keeps the sensor always in the normal position to the sun beams. Much easier is to measure the total horizontal illuminance composed of sunlight and skylight together, which is called the global illuminance GV. Diffuse levels DV are measured separately with a sensor shaded from the sun beams by a disc or a shading ring. The horizontal PV levels can then be derived from the horizontal global and diffuse levels GV and DV respectively, because

PV = GV − DV = Pvn sin γS   [lx]      (4.17)
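In practice TV is thus obtained from the recorded GV and DV; a self-contained sketch under the same assumptions as above:

```python
# Luminous turbidity factor TV from measured global and diffuse
# illuminance GV and DV [lx], eqs. (4.16)-(4.17).
import math

LSC = 133_334.0  # Luminous Solar Constant [lx]

def turbidity_factor(GV, DV, gamma_s, J):
    m = 1.0 / (math.sin(math.radians(gamma_s))
               + 0.50572 * (gamma_s + 6.07995) ** -1.6364)               # eq. (4.12)
    aV = 1.0 / (9.9 + 0.043 * m)                                         # eq. (4.8)
    ecc = 1.0 + 0.034 * math.cos(math.radians(360.0 * (J - 2) / 365.0))  # eq. (4.1)
    EV = LSC * ecc * math.sin(math.radians(gamma_s))                     # eq. (4.3)
    PV = GV - DV                                                         # eq. (4.17)
    return -math.log(PV / EV) / (aV * m)                                 # eq. (4.16)
```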
Figure 3. Solar illuminance on a horizontal plane at ground level.
Under normal sunny conditions TV values can be in the range 1.5 to 6.5, from cold clear situations in mountainous countryside to quite polluted industrial areas or towns. Usually encountered turbidity conditions with their TV values are summarized in Table 1. Thus either normal or horizontal sunlight illuminance at ground level under permanent sunny conditions with different turbidities TV can be determined, as shown in Figures 2 and 3 respectively.

Table 1. Approximate luminous turbidity factors TV expected under clear sky types

TV | Characteristic sky conditions | Typical turbidity state in the site or season
1.5 - 2.5 | Extremely blue sky with clean and dry air | Winter cold dry air conditions on mountains or during clear winter days in the countryside or villages
2.5 - 3.5 | Blue sky, clean and unpolluted air close to sky type 12 | Usual springtime or morning summer low-turbidity conditions in countryside or wind-ventilated residential areas
3.5 - 4.5 | Blue-white sky close to sky type 11, 12 or 13 with a visible turbidity veil | Summertime country air with some water vapor content or towns with low pollution
4.5 - 5.5 | White-blue skies with either few Cirrostratus or Cirrus clouds or some pollution, close to sky types 13 or 14 | Air with higher water vapor content or during summertime smog periods in maritime regions
5.5 - 6.5 | White sky with higher turbidity or polluted air, associated with sky types 13, 14 and 15 | Polluted areas in larger metropolises or industrial regions, especially during summertime smog or noontime
over 6.5 | Sun shaded by clouds although with clear sky types | At any site when sunlight is absent while shaded by a cloud or obstruction
Figure 4. Comparison of theoretical PV / EV values with measured ones during an exemplary clear summer day.
It is important to note that the sunlight transmittance through the atmosphere, defined by the PV / EV ratio as τa, is

τa = PV / EV = Pvn / (∋ LSC) = exp(−aV m TV)      (4.18)
Note that the atmospheric transmittance can be measured either under normal incidence of solar illuminance or on the horizontal plane at ground level, normalized by the equivalent extraterrestrial illuminance respectively. The PV / EV parameter can usually be derived from regular measurements of global and diffuse illuminance as PV / EV = GV / EV − DV / EV; such an example is inserted in Figure 4, using one-minute records measured at the Bratislava CIE IDMP Station in Slovakia, with geographical latitude 48º17´ N and longitude 17º8´ E (see http://idmp.entpe.fr). Commenting on Figure 4, three logical questions can be posed:

1. Was 10th June 2000 in Bratislava a perfect clear day suitable to define the local sunlight availability?
2. How was the sunlight availability parameter PV / EV derived from the regular one-minute measurements at the local CIE IDMP station?
3. Why is the PV / EV parameter the best one to characterize sunlight availability?
Figure 5. Daily global and diffuse illuminance during the example clear day measured in one minute steps.
The first two queries can be answered easily from the daily global and diffuse illuminance levels measured and recorded that day, with their graphical representation against local clock time in Figure 5. The fluent rise of global illuminance is of course caused by the gradual increase of the solar PV component, with the solar altitude culminating at noontime and then decreasing again until sunset. If the correction of the shading-ring influence on the DV level is made properly, then the PV level is equal to the difference GV − DV. Having the measured values in local clock time (LCT), the H value in every minute is needed to calculate the solar altitude in true solar time (TST). For Central European Time (CET), i.e. at the geographical longitude 15º E from Greenwich, eq. (4.19) is used:

H = CET + ET + 4(L − 15°)/60   [hour]      (4.19)
where ET is the equation of time, approximating the hour shifts due to the irregular number of days in different months, which can be approximated after Pierpoint, 1982 or Tregenza and Sharples, 1993 as

ET = 0.17 sin[4π(J − 80)/373] − 0.129 sin[2π(J − 8)/355]      (4.20)
J is the day number, i.e. J = 1 on 1st January, and when February has 28 days, J = 365 on 31st December.
L is the longitude of the locality, e.g. for Bratislava L = 17.13º.

Thus for every recorded measurement the corresponding solar altitude can be determined. The third question is answered once the following facts, important for the sunlight availability concept and definition, are considered. Availability within a half-day or whole-day period is detectable either on a horizontal surface or on a plane perpendicular to the momentary sun-beam direction. The latter measurement needs a fine sun-tracker device to follow the daily sun path, which means a rather expensive instrument and complex maintenance and control. Therefore fixed illuminance meters are cheaper and easier to install and maintain, because only the shading ring of the skylight diffuse illuminance meter has to be adjusted and checked to obstruct sun beams from its sensor. Moreover, determining the ratio PV / EV in fact enables the definition of several further parameters of sunlight availability conditions (a worked sketch follows this list):

(a) the absolute illuminance level on terrain or flat roofs after eq. (4.17), i.e.

PV = (PV / EV)(LSC ∋ sin γS)   [lx]      (4.21)
(b) using the known PV / EV ratio, the light transmittance of the atmosphere after eq. (4.18) as well as its luminous turbidity factor TV after eq. (4.16) can also be determined; e.g. for the previous exemplary Bratislava measurement case, the TV course in dependence on the solar altitude is shown in Figure 6. Except for solar altitudes under 20º, the turbidity seems to be quite stable, at around TV = 3.
(c) the absolute illuminance on a plane normal to the parallel sun beams can be determined, i.e.

Pvn = PV / sin γS   [lx]      (4.22)

which is necessary to calculate sunlight availability on arbitrarily tilted and oriented planes as shown in Figure 7 (e.g. for sloped solar collectors and photovoltaic panels or hollow light guides set into a roof tilt), because

PVβ = Pvn cos i   [lx]      (4.23)

where i is the spatial incidence angle, definable for a plane with tilt angle β as

cos i = cos β sin γS + sin β cos γS cos A      (4.24)

and A is the azimuth angle of the tilted plane An measured from the solar azimuth AS, both taken from capital North, or the absolute value A taken between the direction of the plane normal projected into plan nP and the solar meridian in Figure 7, i.e. A = |AS − An|. Therefore PV / EV is the primary sunlight availability determinant of sorting importance.
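As promised above, a sketch tying eqs. (4.19), (4.20), (4.23) and (4.24) together (CET and H in hours, angles in degrees; names are illustrative):

```python
# From clock time to sun-beam illuminance on a tilted plane.
import math

def true_solar_time(CET, J, L_deg):
    ET = (0.17 * math.sin(4.0 * math.pi * (J - 80) / 373.0)
          - 0.129 * math.sin(2.0 * math.pi * (J - 8) / 355.0))   # eq. (4.20) [hours]
    return CET + ET + 4.0 * (L_deg - 15.0) / 60.0                # eq. (4.19)

def tilted_beam_illuminance(Pvn, gamma_s, beta, A):
    g, b, a = map(math.radians, (gamma_s, beta, abs(A)))
    cos_i = math.cos(b) * math.sin(g) + math.sin(b) * math.cos(g) * math.cos(a)  # eq. (4.24)
    return Pvn * max(0.0, cos_i)   # eq. (4.23); zero when the sun is behind the plane
```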
4.3. Sunshine Duration as the Time Descriptor of Sunlight Availability

Taking regular simultaneous recordings of global GV and diffuse DV illuminance as well as Ge and De irradiance, e.g. in one-minute time steps, the sunshine duration within a half-day or a whole day can also be derived, which determines the monthly average when all sunny periods are summarized. Note that, in accordance with the WMO, 1982 standardization based on irradiance measurements of Pen, moments are considered sunny when Pen ≥ 120 W/m2. Thus simultaneous regular irradiance recordings have to be available, and the precise summation of hours with sunshine ∑S has to be calculated. In annual daylight climate studies the monthly relative sunshine duration s is used, defined by the sum of the daily measured sunshine hours ∑S during a particular month:

s = ∑S / ∑Sa      (4.25)

where Sa is the astronomically possible sunshine duration in a particular day during the chosen month, which for any whole day is the time between sunrise and sunset, i.e.

Sa = Hss − Hsr = arccos(−tan ϕ tan δ) / 7.5°      (4.26)

where Hsr and Hss are the hours of sunrise and sunset respectively in TST. For an absolutely cloudless and clear period, when during a half-day or the whole day s = 1, different turbidities can occur in the range TV = 1.5 to 6.5, which determine the sunlight and skylight availabilities.
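The astronomical day length of eq. (4.26) and the monthly ratio of eq. (4.25) can be evaluated as follows (a sketch; the clamp handles polar day and night):

```python
# Astronomically possible sunshine duration and relative sunshine
# duration, eqs. (4.25)-(4.26).
import math

def astronomical_day_length(J, phi_deg):
    delta = math.radians(23.45 * math.sin(math.radians(360.0 * (J - 81) / 365.0)))  # eq. (4.5)
    x = -math.tan(math.radians(phi_deg)) * math.tan(delta)
    x = min(1.0, max(-1.0, x))                     # clamp for polar day/night
    return math.degrees(math.acos(x)) / 7.5        # eq. (4.26) [hours]

def relative_sunshine_duration(sunshine_hours, day_lengths):
    return sum(sunshine_hours) / sum(day_lengths)  # eq. (4.25), e.g. over a month
```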
Figure 6. Daily course of the luminous turbidity factor in the exemplary clear day.
Figure 7. A scheme of the co-ordinate system for the illuminance calculation on a tilted surface.
Everyday life experience testifies that the assumption of an ideally stable clear day with a cloudless and stationary atmosphere is seldom true. In reality, in some climate regions, e.g. in desert or Mediterranean zones, the occurrence of clear days is quite numerous and probably associated with long periods of sunshine. However, a local increase of turbidity and cloudiness can change the daylight situation considerably. All such changes are roughly measured and documented by the GV and DV data collected regularly during any day. The character of these changes can be gathered from the fluent or irregular illuminance courses reviewed above. A more detailed analysis can be done via the courses of the ratio GV / EV, including the influences of the atmospheric transmittance as well as turbidity effects, and via the ratio DV / GV, expressing the proportion of skylight in the global or total horizontal illuminance at ground level. Simplifying the complex and numerous influences of turbidity and cloud type as well as cloud cover changes, four basic typical daylight situations, i.e. clear, cloudy, overcast and dynamic half-days, can be considered:
• Situation 1. Starting with the exemplary clear day with the daily illuminance courses in Figure 5, the resulting trend of GV / EV together with the DV / GV graph in Figure 8 shows an overall high efficiency and utilization of the extraterrestrially available illuminance, with prevailing sunlight and a lower DV / GV ratio. The influence of lower and higher turbidity factors TV can be expected. The luminous turbidity can rise either due to the natural process of evaporation, resulting in higher humidity in the atmosphere, or through human activities polluting the air, e.g. car traffic smog, industrial smoke etc. The lowest turbidities usually occur during clear winter days, when the flow of cold and dry arctic air contains little water vapor and pollution. In any case, the relative sunshine duration during the clear sky situation is rather high, i.e. more than 0.75, usually around 0.9.
Figure 8. Exemplary GV / EV and DV / GV ratios for the clear day situation 1.
Figure 9. Measured daily courses of global and diffuse illuminance during a cloudy day.
• Situation 2. Several natural events like inversion fog, volcano pollution or sand storms, as well as various cloud covers, cause the situation of a cloudy day, when the sunlight is considerably filtered and reduced and the skylight prevails. Such an exemplary cloudy day is presented in Figure 9. The ratio GV / EV in Figure 10 therefore documents the low utilization of the extraterrestrially available illuminance, together with high DV / GV ratios, also in Figure 10. Moreover, the subdued sunlight is also restricted in time; thus the relative sunshine duration during cloudy days lies within the rather wide range 0.01 to 0.75, most usually around 0.4. Note that in the example in Figures 9 and 10 the daily relative sunshine duration was only 0.07, measured in Bratislava on 20th May 1995.
Figure 10. The cloudy day situation analyzed after GV / EV and DV / EV ratios in relation to solar altitude.
Figure 11. Measured daily courses of global and diffuse illuminance during an overcast day.
Figure 12. Typical grouping of GV / EV and DV / GV ratios on an overcast day.

• Situation 3. Dense layers of cloudiness, usually vast Stratus clouds, sometimes in combination with dense fog, block the sunlight during a whole-day or half-day overcast period. Therefore GV / EV = DV / EV, the luminous turbidity factor TV is very high, and PV is zero during this situation; thus the relative sunshine duration is under 0.01 as well. Such conditions are frequent in Central Europe during autumn and wintertime, and an exemplary measured situation 3 is shown in Figure 11. The consequence is a typically low value of GV / EV = DV / EV and thus the highest ratio DV / GV = 1, shown in Figure 12.
• Situation 4. This is the dynamic situation during a half-day caused by patchy moving smaller clouds, which sometimes do not change the sky pattern considerably but temporarily shade and uncover the momentary sun position. Thus different measured TV values in the range 2.5 to over 6, due to cloud light transmittance, influence the GV level in the minute data shown in Figure 13. As GV / EV and DV / EV rise and fall with frequent cloud movement, the GV / EV and DV / GV diagrams in Figure 14 are also messy, showing dynamic quick changes. The relative sunshine duration is likewise within the rather wide range 0.01 to 0.75, but in this situation most usually around 0.6. In the exemplary case in Figures 13 and 14 the daily relative sunshine duration was 0.332.
The above-mentioned typical half-day or daily daylight situations can be helpful for the evaluation of daily courses or extreme efficiencies of daylight and solar installations. However, the overall economic benefit, profitable energy availability or purpose-oriented design needs long-term probable data, i.e. information on their usefulness in different months, seasons and years during their existence, performance and utilization. Therefore the typical annual situations occurring in any locale or region are needed. These also depend on the local monthly and yearly relative sunshine duration. To document and roughly specify such yearly situations in several climate zones, Oki et al., 1991 compiled relative sunshine durations at various locales worldwide. A few examples can illustrate the monthly sunlight availability in different places in various climates.
Figure 13. Example of a dynamic daylight situation in summertime.
Extremely long sunshine seasons corresponding to a very high relative sunshine duration over 0.7 can be expected only in the tropical desert climate with long hot and dry months, i.e. there sunshine lasts over 70 % of the astronomically possible daytime throughout the year. Figure 15 shows the monthly changes of s that form the average yearly relative sunshine
duration sY in Las Vegas (sY = 0.83), Kimberley in South Africa and Alice Springs in Australia (both with sY = 0.79), Cairo in Egypt (sY = 0.77), and Los Angeles and Sacramento, USA (both sY = 0.74).
Figure 14. Example of a dynamic daylight situation characterized by GV / EV and DV / GV ratios.
Figure 15. Monthly distribution of sunshine in desert locales after Oki et al., 1991.
Except in Sacramento, where seasonal variations are quite noticeable, all other locations in the tropical hot and dry zone are relatively evenly distributed during the year within the range of relative sunshine duration s = 0.65 – 0.85, in spite of the geographical latitude difference (e.g. Las Vegas, ϕ = 36º N to Kimberley, ϕ = 24.7º S). Thus many clear days can be expected in all these locales.
Figure 16. Monthly distribution of sunshine in Subtropical and Mediterranean zones.
Figure 17. Examples of monthly sunshine variations in the equatorial climate of Indonesia.
A different monthly distribution can be noticed in the Subtropical and Mediterranean climate, where the average yearly relative sunshine duration sY often occurs in the range 0.6 to 0.7. Some exemplary locales in this region are shown in Figure 16, located in the Northern hemisphere (full symbols) and in the Southern hemisphere except New Delhi (open symbols), where the monthly courses of s are in reciprocal positions indicating relevant seasonal variations, with roughly all s values quite high, but in the range from 0.5 in wintertime or the monsoon season to 0.8 in summertime.
Very special sunshine conditions exist in the equatorial zone, where irregular daily evaporation and rain periods vary turbidity and cloudiness formations, reducing sunshine considerably. From the view of sunshine duration, the territory of Indonesia is interesting, with meteorological stations within the latitudes from Medan, ϕ = 3.5º N with sY = 0.4, to Dili on the Timor Island, ϕ = 8.5º S with sY = 0.79 (Figure 17). It is quite evident that all sunshine duration courses are rather close to the horizontal lines of yearly averages, without remarkable mutual tendency. It seems that the Western located sites like Medan and Padang get considerably less sunshine than the Eastern stations in Palu and Dili. In fact it is not absolutely true that equatorial locations have the maximal solar energy profit, due to possible relevant reductions in sunshine duration during noontime and afternoon hours. Detailed studies of sunshine duration yearly trends in Europe (Kittler, 1995) are probably valid for the temperate climate zone, where the average yearly relative sunshine duration sY is reduced with the progressing latitude from 40º to 60º North, with a respective decrease of sY from 0.6 to approximately 0.3. All European sunshine duration courses document a similar trend of regular seasonal changes (full symbols in Figure 18). Further analysis of the probability of occurrence of the four basic daylight situations in dependence on the monthly relative sunshine duration has also shown links between Central European (Bratislava) and Mediterranean (Athens) long-term data (Darula et al., 2004). Although the monthly averages of relative sunshine duration s follow random courses in different climate zones world-wide, all of these result from the occurrence probability of the typical half-day daylight situations already mentioned. Using eight years of Bratislava CIE IDMP measurements of half-day relative sunshine duration monthly averages within the range s = 0.09 – 0.734, as well as the five-year span of Athens CIE IDMP data with the range s = 0.21 – 0.891, the following approximations were found. In the clear situation 1:

• for the half-day during morning, the probability of this situation in % can be approximated and linked with monthly s values after
Pm1 = 100 (0.55 s − 0.95 s^2 + 1.65 s^3) [%]   (4.27)

• during the afternoon half-day

Pa1 = 100 (0.62 s − 0.77 s^2 + 1.26 s^3) [%]   (4.28)
It is to be noted that although these relations are quite logical, the empirical formulae were not determined before, due to the absence of regularly measured data or long-term records. These trends were tested also by applying 5-minute data sets gathered during the eight years 1994-2001, but instead of the relative half-day sunshine duration the monthly average sunshine duration was used. Although the spread of data seemed to be wider, the above formulae can be applied even in relation to the monthly averages of sunshine duration, which are universally available and often also represent long-term local climatic conditions. With regard to the Athens data it is expected that under sunshine durations over 0.93 there is a 100% probability of a clear or almost cloudless day, as may often be the case in a desert climate.
Figure 18. Some locations with monthly sunshine variations in the temperate climate zone.
In the cloudy sky situation 2, conditions during a half-day have a different trend of dependence on sunshine duration, because the probability culminates at about 0.4 and decreases to zero probability at sunshine durations around s = 0.93; thus a curve and a line can simulate both the morning and afternoon trends, with a zero value in the range s ≥ 0.93:

• for the half-day during morning, in the range s = 0 to 0.5

Pm2 = 100 s (1.2 − 1.85 s) [%]   (4.29)

in the range s = 0.5 to 0.93

Pm2 = 66.86 (0.93 − s) [%]   (4.30)

in the range s = 0.93 to 1

Pm2 = 0 [%]   (4.31)

• during the afternoon half-day, in the range s = 0 to 0.5

Pa2 = 100 s (1.2 − 1.6 s) [%]   (4.32)

in the range s = 0.5 to 0.93

Pa2 = 46.51 (0.93 − s) [%]   (4.33)

in the range s = 0.93 to 1

Pa2 = 0 [%]   (4.34)
In contrast to the clear and overcast cases, the cloudy half-days are distributed more randomly, but occur seldom or not at all at the sunshine duration extremes, i.e. close to s = 0 or 1, where the probability of cloudy sky occurrence is zero. A logical decreasing trend is associated with the overcast sky situation 3, as the overcast skies diminish with rising sunshine duration; thus

• for the half-day overcast morning the probability of an overcast sky is

Pm3 = 100 (1 − s)^2.47 [%]   (4.35)

• during the afternoon half-day

Pa3 = 100 (1 − s)^2.7 [%]   (4.36)
Of course the highest percentages of fully dull days or longer periods are usual in Bratislava during the winter months, when the monthly mean relative sunshine durations can be extremely low, with monthly averages around 0.1, while during summer months, especially in Athens, overcast half-days are very seldom. A similarity to cloudy conditions can be noticed under dynamic situations, when the clouds are partly interfering and from time to time shading sunlight. On one side, when the sunshine duration is rather low, the dynamic cases are also rare, while on the other side, with very high sunshine duration, the overall conditions change to clear sky cases. In contrast to all other types, the difference between morning and afternoon distributions seems to be smaller, and the simulated probability curves indicate this trend, expressed by the best-fit formulae in %:
• for the half-day dynamic morning, in the range s = 0 to 0.93

Pm4 = 100 s^2 (2.5 − 2.7 s) [%]   (4.37)

in the range s = 0.93 to 1

Pm4 = 0 [%]   (4.38)

• while during the afternoon half-day, in the range s = 0 to 0.96

Pa4 = 100 s^2 (3.7 − 3.85 s) [%]   (4.39)

in the range s = 0.96 to 1

Pa4 = 0 [%]   (4.40)
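For orientation, the eight approximations (4.27)-(4.40) can be wrapped into a few lines of code. The sketch below is not the authors' published software; the function and variable names are ours, chosen only for illustration:

```python
# Probabilities [%] of the four half-day daylight situations, eqs. (4.27)-(4.40),
# from the monthly relative sunshine duration s (0 <= s <= 1).
# Illustrative sketch only; names are not from the source.

def situation_probabilities(s, half="morning"):
    if half == "morning":
        p1 = 100 * (0.55*s - 0.95*s**2 + 1.65*s**3)            # clear, eq. (4.27)
        if s < 0.5:
            p2 = 100 * s * (1.2 - 1.85*s)                      # cloudy, eq. (4.29)
        elif s < 0.93:
            p2 = 66.86 * (0.93 - s)                            # cloudy, eq. (4.30)
        else:
            p2 = 0.0                                           # cloudy, eq. (4.31)
        p3 = 100 * (1 - s)**2.47                               # overcast, eq. (4.35)
        p4 = 100 * s**2 * (2.5 - 2.7*s) if s < 0.93 else 0.0   # dynamic, eqs. (4.37)-(4.38)
    else:
        p1 = 100 * (0.62*s - 0.77*s**2 + 1.26*s**3)            # clear, eq. (4.28)
        if s < 0.5:
            p2 = 100 * s * (1.2 - 1.6*s)                       # cloudy, eq. (4.32)
        elif s < 0.93:
            p2 = 46.51 * (0.93 - s)                            # cloudy, eq. (4.33)
        else:
            p2 = 0.0                                           # cloudy, eq. (4.34)
        p3 = 100 * (1 - s)**2.7                                # overcast, eq. (4.36)
        p4 = 100 * s**2 * (3.7 - 3.85*s) if s < 0.96 else 0.0  # dynamic, eqs. (4.39)-(4.40)
    return {"clear": p1, "cloudy": p2, "overcast": p3, "dynamic": p4}

# The sunniest Bratislava month, August 2001, had s = 0.734:
print(situation_probabilities(0.734, "morning"))
```

For s = 0.734 the morning probabilities come out near 54 % clear, 13 % cloudy, 4 % overcast and 28 % dynamic, i.e. they sum close to 100 %, as the best-fit character of the formulae suggests they should.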
Now the summary of all four typical daylight conditions can be done in two diagrams characterizing the distribution of the different types for morning and afternoon half-days dependent on the monthly average relative sunshine duration. Although both Figures 19 and 20 are based on Bratislava and Athens data, there is a fair chance that the distribution tendency is quite general and applicable in any locality, assuming that its actual or long-term average monthly relative sunshine duration is known. These distribution probabilities form crucial information in regions where no measured daylight descriptor other than sunshine duration is available. Following the percentages of probability in all four type groups (after Figures 19 and 20), the numbers of half-days are to be rounded to integers and summed into the total number of half-days (Nm or Na). The months of January, March, May, July, August, October and December must not exceed 31 morning and 31 afternoon cases; in April, June, September and November there are only 30; in February either 28 or, in a leap year, 29 morning and afternoon cases are valid. Because the rounding process might accidentally overestimate these maximums, the number of dynamic half-days (Nm4 or Na4) in any month might be reduced by calculating it from the maximum, i.e.
Nm4 = Nm − (Nm1 + Nm2 + Nm3)   (4.41)

Na4 = Na − (Na1 + Na2 + Na3)   (4.42)
where Nm1 or Na1 is the integer number of clear half-days in a particular month, Nm2 or Na2 is the integer number of cloudy half-days in the same month, and Nm3 or Na3 is the integer number of overcast half-days in that month. When the morning and afternoon half-days in a particular month are thus classified, the monthly sunshine duration s has to be split into subsequent parts following the most probable redistribution. Two possible redistribution models were tested, with either the recommended probabilities or the appropriate numbers of half-days. The sunny half-days in situation 1 are assumed to be almost fully covered by sunlight, so their sunshine duration is taken as sm1 = sa1 = 0.92. In this way a full half-day of sunshine is allocated to clear situations, but only partly to sunny intervals during the dynamic and cloudy half-days, while overcast days are assumed to be totally without any sunshine. The relative sunshine duration has to be redistributed among the three relevant half-day situations, so two possible models were tested for both mornings and afternoons, one taking functions of the probabilities, while the other used the half-day numbers:
sm = f (Pm) = 0.92 Pm1 + 0.21 Pm2 + 0.56 Pm4   (4.43)

sm = f (Nm) = (0.92 Nm1 + 0.25 Nm2 + 0.61 Nm4) / Nm   (4.44)

sa = f (Pa) = 0.92 Pa1 + 0.05 Pa2 + 0.61 Pa4   (4.45)
sa = f (Na) = (0.9 Na1 + 0.25 Na2 + 0.5 Na4) / Na   (4.46)
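The classification chain from a monthly s value to integer numbers of half-days and back can be sketched as follows, reusing situation_probabilities from the sketch above. The rounding rule and the cap of eqs. (4.41)-(4.42) follow the text, and s_from_counts implements the count-based models (4.44) and (4.46); all names are illustrative assumptions of ours:

```python
import calendar

def classify_half_days(s, year, month, half="morning"):
    """Integer numbers of clear, cloudy, overcast and dynamic half-days.
    Requires situation_probabilities() from the previous sketch."""
    n_total = calendar.monthrange(year, month)[1]      # 28-31 half-days per month
    p = situation_probabilities(s, half)
    n1 = round(n_total * p["clear"] / 100)             # clear
    n2 = round(n_total * p["cloudy"] / 100)            # cloudy
    n3 = round(n_total * p["overcast"] / 100)          # overcast
    # dynamic half-days absorb any rounding surplus, eqs. (4.41)-(4.42)
    n4 = max(n_total - (n1 + n2 + n3), 0)
    return n1, n2, n3, n4

def s_from_counts(n1, n2, n3, n4, morning=True):
    """Reconstructed monthly s after the count models, eq. (4.44) or (4.46)."""
    n = n1 + n2 + n3 + n4
    if morning:
        return (0.92*n1 + 0.25*n2 + 0.61*n4) / n       # eq. (4.44)
    return (0.9*n1 + 0.25*n2 + 0.5*n4) / n             # eq. (4.46)

n1, n2, n3, n4 = classify_half_days(0.734, 2001, 8, "morning")
print(n1, n2, n3, n4, s_from_counts(n1, n2, n3, n4))
```

For August 2001 (s = 0.734) this yields roughly 17 clear, 4 cloudy, 1 overcast and 9 dynamic mornings out of 31, and the reconstructed value of about 0.71 stays close to the measured monthly average, which is the kind of agreement Figures 21 and 22 document.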
Figure 19. Daylight morning half-day situations with probability of four basic sky conditions.
Figure 20. Daylight situation probability during afternoon half-days.
Applying these relations, Figure 21 shows comparisons with measured data respecting the monthly sunshine durations recorded in Bratislava in the years 1994-2001 and in Athens in the years 1992-1996 during morning half-days. In the same way the afternoon data were used in Figure 22. It seems that both models could be used, but it has to be noted that in practical tasks the monthly numbers have to be used; therefore it is recommended to use the distributions based on long-term average s data.
Figure 21. Athens and Bratislava data confirmed the proposed redistribution model.
Figure 22. Athens and Bratislava data confirmed the redistribution.
The sunny intervals are probably proportionally distributed within morning and afternoon clear, dynamic or partly cloudy half-days to correspond with the overall monthly mean value of the relative sunshine duration, following the condition

s = (sm + sa) / 2   (4.47)
The modeled redistribution trends slightly overestimate lower sunshine duration doses and underestimate those in seasons with higher sunshine duration. These assumptions respect some safety margins in further modeling, when simulated sky patterns and the distribution of sunny intervals have to be considered and specified in morning and afternoon half-day courses, in absolute sun and sky illuminance values, as well as when modeling sky luminance distributions either outdoors or indoors within window solid angles. Although the overall monthly characteristic distribution of the four basic daylight situations can predict the sunlight and skylight availability in a particular month quite close to reality, the exact dates of their occurrence are distributed randomly in various years. However, due to the small daily changes of solar declination within a particular month, the sun paths and solar altitudes are close. Therefore theoretically predetermined sunlight and skylight availabilities can be approximated with practical preciseness throughout the year in any locale world-wide where sunshine duration is available.
4.4. Sky Luminance Distribution as the Determinant of Skylight

Besides the penetration and reduction of parallel sun beams passing through the atmosphere, these are also scattered by atmospheric particles in all directions. The result is the sky vault luminance, forming a huge source of diffuse skylight. The light scattering and multiple inter-reflection due to air transmittance, turbidity and cloudiness produce the sky luminance pattern seen from the globe surface. This is imagined as the luminance distribution on a fictitious sky hemisphere. Nowadays quick sky luminance scanners are available which can measure these patterns in their irregularities and the influences of spatial cloudiness distributions. However, for practical purposes typical skies are imagined to be quasi-homogeneous, due to almost even turbidities and often occurring diffuse cloud layers, in the range from absolutely clear and cloudless coupled with sunshine to densely overcast and dull with no sunlight at all. In this range typical sky luminance patterns are standardized by CIE, 2003 and ISO, 2004 with the same content. In the broad-band visible spectrum the sunlight diffusion can be defined and represented by a luminance body with a symmetrical form, rotational around the direction of the sun beam, which can be specified by its sectional curve called the scattering indicatrix. Sometimes the absolute indicatrix is used, but more often relative ones are defined by the ratio of the luminance in an arbitrary direction La divided by the normalizing luminance Ln at a 90-degree angle to the direction of the sun beam. Because the relative scattering indicatrix is usually defined in the form of a mathematical function depending on the scattering angle from the sun beam, it is called the indicatrix function f (χ), thus
f (χ) = La / Ln   (4.48)
and it can be measured along the solar almucantar, i.e. the horizontal circle on the hemisphere at the sun position (Kittler, 1993).
In a perfectly homogeneous and turbid environment, e.g. in dense fog, the scattering is uniform in all directions, thus f (χ) = 1, which is characteristic of the so-called Lambertian diffusion with the same luminance in all directions. In this case the indicatrix is circular and the luminance body forms a sphere. In fact, since Lambert, 1760, 1773 made the assumption that the whole sky has a unity uniform luminance, this sky was accepted as the only one for the calculation methods of interior daylight illuminance, due to its simplicity. Furthermore this assumption enabled the introduction of the Daylight Factor concept with graphical and protractor tools for practical daylight estimation. In the CIE, 2003 and ISO, 2004 standards this uniform overcast sky is included in the fifteen-sky set, representing brighter overcast skies with no luminance gradation from horizon to zenith. However, the darkest and dull winter overcast skies, measured and defined first by Kähler, 1908, have shown that there is a gradation of luminance from horizon Lh to zenith LZ which roughly follows the relation for an arbitrary sky luminance La at any elevation γ

La / Lh = 1 + 2 sin γ   (4.49)
Later American measurements by Kimball and Hand, 1921 with the same results persuaded Moon and Spencer, 1942 to recommend the well-known formula dependent on the zenith luminance, which was standardized by CIE, 1955:

La / LZ = (1 + 2 sin γ) / 3   (4.50)
or with the angular distance of the sky element from zenith Z
La / LZ = (1 + 2 cos Z) / 3   (4.51)
with Lh / LZ = 1/3, which is now called the Traditional Overcast Sky. In the same year Petherbridge, 1955 published his results of luminance measurements of the overcast sky when the ground was covered by fresh snow and found the gradation

La / LZ = (1 + cos Z) / 2   (4.52)
or Lh / LZ = 1/2. Because the luminance gradation on clear skies has a reversed trend with an exponential form, the CIE, 2003 and ISO, 2004 standards model the gradation function ϕ (Z) using the formula
ϕ (Z) = 1 + a exp (b / cos Z)   (4.53)

thus for the zenith itself, as Z = 0º,

ϕ (0°) = 1 + a exp b   (4.54)

and the normalized gradation is

ϕ (Z) / ϕ (0°) = [1 + a exp (b / cos Z)] / (1 + a exp b)   (4.55)
where a and b are parameters standardized in CIE, 2003 and ISO, 2004 which model the gradation trends of all 15 sky types, as shown in Figure 23. As under perfectly densely overcast skies (sky types 1, 3 and 5) the indicatrix function has a unity value, the relative luminance pattern is formed by eq. (4.55) with an equal azimuthal uniformity, i.e.
La / LZ = ϕ (Z) / ϕ (0°) = [1 + a exp (b / cos Z)] / (1 + a exp b)   (4.56)

Figure 23. Curves of gradation functions after the ISO/CIE standard with parameters a, b.
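A few lines of code make the gradation behaviour of eqs. (4.53)-(4.56) tangible. The a, b values below are the ISO/CIE parameters of sky type 1 (a = 4.0, b = -0.70), quoted from the standard and worth verifying there before use:

```python
from math import cos, exp, radians

def gradation(Z_deg, a=4.0, b=-0.70):
    """phi(Z)/phi(0 deg) for a zenith angle Z in degrees, eqs. (4.53)-(4.55)."""
    phi_Z = 1 + a * exp(b / cos(radians(Z_deg)))   # eq. (4.53)
    phi_0 = 1 + a * exp(b)                         # eq. (4.54)
    return phi_Z / phi_0                           # eq. (4.55)

# Under a unity indicatrix (sky types 1, 3, 5) this ratio is directly La/LZ,
# eq. (4.56); towards the horizon it approaches 1/(1 + a exp b), i.e. roughly
# the 1/3 of the Traditional Overcast Sky for a = 4.0, b = -0.70:
print(gradation(80.0))   # about 0.36
```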
However, if any brighter spots around the sun position can be seen from ground level, then both the indicatrix and the gradation functions will specify the luminance distribution on the sky vault in relative terms, i.e. normalized to the zenith luminance, thus

La / LZ = [f (χ) ϕ (Z)] / [f (ZS) ϕ (0°)]   (4.57)
where ZS is the solar zenith angle, Z is the zenith angle of any sky element, and χ is the closest angular distance of the sky element to the sun position, which after spherical trigonometry can be calculated as

χ = arccos (cos ZS cos Z + sin ZS sin Z cos AZ)   (4.58)

where AZ is the azimuth angle of the sky element meridian from the sun meridian.
The indicatrix function f (χ) is also standardized in the CIE, 2003 and ISO, 2004 documents via the c, d and e parameters, defining six standard indicatrix curves after

f (χ) = 1 + c [exp (d χ) − exp (d π/2)] + e cos^2 χ   (4.59)
which for the solar zenith angle ZS is

f (ZS) = 1 + c [exp (d ZS) − exp (d π/2)] + e cos^2 ZS   (4.60)

Figure 24. Standard indicatrix formula with parameters c, d, e.
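Putting the gradation and indicatrix functions together with the scattering angle of eq. (4.58) gives the full relative pattern of eq. (4.57). The self-contained sketch below uses the ISO/CIE parameters of sky type 12 (a = -1.0, b = -0.32, c = 10.0, d = -3.0, e = 0.45) as a worked case; the parameter values are quoted from the standard and the function names are ours:

```python
from math import acos, cos, exp, pi, radians, sin

def f_indicatrix(chi, c, d, e):
    """Scattering indicatrix, eq. (4.59); chi in radians."""
    return 1 + c * (exp(d * chi) - exp(d * pi / 2)) + e * cos(chi)**2

def phi_gradation(Z, a, b):
    """Gradation function, eq. (4.53); Z in radians (Z < 90 deg)."""
    return 1 + a * exp(b / cos(Z))

def relative_luminance(Z_deg, Az_deg, Zs_deg, a, b, c, d, e):
    """La/LZ of eq. (4.57) for a sky element at zenith angle Z and azimuth
    Az measured from the sun meridian, with the sun at zenith angle Zs."""
    Z, Az, Zs = radians(Z_deg), radians(Az_deg), radians(Zs_deg)
    chi = acos(cos(Zs)*cos(Z) + sin(Zs)*sin(Z)*cos(Az))   # eq. (4.58)
    return ((f_indicatrix(chi, c, d, e) / f_indicatrix(Zs, c, d, e))
            * (phi_gradation(Z, a, b) / phi_gradation(0.0, a, b)))

# Sky type 12 with the sun at a 30 deg altitude (Zs = 60 deg):
pars = dict(a=-1.0, b=-0.32, c=10.0, d=-3.0, e=0.45)
print(relative_luminance(60, 10, 60, **pars))   # circumsolar region: La/LZ >> 1
print(relative_luminance(60, 90, 60, **pars))   # 90 deg from the sun: much darker
```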
Figure 25. Scheme of the sky vault with inserted indicatrix curves assuming standard indicatrix 5 with a unity gradation function.
Figure 26. Five ISO/CIE overcast sky patterns of relative luminance distribution.
Originally this concept of expressing the indicatrix with a prolonged forward scattering after eq. (4.59) was proposed by Krat, 1943, and it proved a good approximation for homogeneous clear and turbid atmospheres with turbidity factors TV (Kittler, 1985). Now almost all currently available sky luminance models, e.g. by Kittler, 1967, CIE, 1973, Perez et al., 1991 or Igawa et al., 2004, apply the exponential formula. The standard parameters c - e are given also in Figure 24 with the curves specifying the six indicatrix standards. A graphical interpretation of the indicatrix curves is easier to imagine in concentric diagrams, which can be inset into the section of the sky hemisphere in Figure 25, showing the directional luminance influences toward the sky center under the assumption of a unity gradation function. The relative comparative lengths of the La and LZ arrows indicate the luminance distribution increasing with the angular distance from the sun position
with the minimum luminance at ninety degrees from the sun. Together with the gradation influence, in an arbitrary case these form the final sky luminance pattern, which can be measured by a sky luminance scanner. On the other hand, from sky luminances measured on the solar almucantar, which have a constant gradation influence, the actual indicatrix course can be checked (Kittler, 1993). A similar method can be applied to check the gradation course on the circular section of the sky hemisphere which has the same angular distance from the sun and passes through the zenith. In this way several sky scans were analyzed by Kittler et al., 1998 or Markou et al., 2007a, while a method for analyzing sky scans to obtain sky types was published by Tregenza, 2004. Using eq. (4.57) the relative luminance pattern of all fifteen standard sky types can be calculated applying their indicatrix and gradation functions. These relative sky luminance distributions have to be rotated along the daily sun-path following the orientation of the solar meridian. For example, in the case when the solar meridian is placed horizontally in the equidistant plan projection of the sky hemisphere with a 30-degree solar altitude, the schematic sky patterns are shown in Figure 26 for five overcast sky types, in Figure 27 for cloudy sky types and in Figure 28 for clear sky types.
Figure 27. Typical cloudy sky patterns under relative quasi-homogeneous conditions.
Some complex economic and energy utilization evaluations can reach conclusions only when the locally frequent annual sky types can be taken into account. Due to the slope, orientation, obstruction and placement of the illuminated surface or opening, different areas of the sky patterns will act effectively within their solid angle. For such sophisticated tasks and design solutions the above mentioned formulae have already served to develop new computer methods like MAM (Kittler and Darula, 2006) or HOLIGILM (Kocifaj et al., 2008), implemented in user-friendly and simple-to-handle computer programs including the ISO/CIE General Standard Sky, like ModelSky (Kocifaj and Darula, 2002) or SkyModeller (Roy et al., 2007).
The latter also overcomes the tedious procedure of calculating the local sun position changes world-wide as well as the time changes in sky patterns. Another application using Virtual Sky Domes was proposed by Wittkopf, 2004 within CAD-based software.
Figure 28. Clear sky patterns after ISO standard.
4.5. Diffuse Illuminance as a Resultant Descriptor of Skylight Availability

Following the basic Lambert, 1760 law of classical photometry, valid for larger plane light sources, the illuminance element dIa on any surface is caused by the source luminance La seen within the solid angle element dω from the illuminated element. This spherical solid angle dω is to be reduced according to the placement of the illuminated plane by its projection onto the illuminated surface, thus it is dωP and
dIa = La dωP   (4.61)
then the total illuminance from the planar source is
Ia = ∫∫ La dωP   (4.62)
In the case of the horizontal sky diffuse illuminance
DV = ∫∫ La cos Z dω   (4.63)
There are several possibilities to determine the absolute outdoor skylight illuminance level. When the whole sky vault is illuminating a horizontal outdoor surface there are three possible solutions to proceed with the integration after (4.63), i.e.

• following the horizontal circular strips,
• following the system of lunes with their culmination in the North and South poles,
• following the vertical half lunes centralized in the sky zenith.
The general classical form follows the overall integration of the specific luminance in elements of the sky solid angles projected onto the illuminated plane, i.e. in the case of a horizontal plane:

DV = ∫_{Z=0°}^{90°} ∫_{A=0°}^{360°} La sin Z cos Z dZ dA   (4.64)
where La is the luminance in the centre of an arbitrary sky element, which varies from place to place on the sky vault in accordance with its angular zenith distance Z and azimuth angle A, and is given by eq. (4.57) as

La = LZ [f (χ) ϕ (Z)] / [f (ZS) ϕ (0°)]   (4.65)
However, the normalizing zenith luminance is a constant for all sky elements and in fact specifies the absolute luminance level of the sky, while its denominator adjusts this level constantly for a certain sky type; thus it does not have to be integrated, so (4.64) can be rewritten as

DV = LZ ∫_{Z=0°}^{90°} ∫_{A=0°}^{360°} [f (χ) ϕ (Z)] / [f (ZS) ϕ (0°)] sin Z cos Z dZ dA   (4.66)
Nowadays, utilizing computer possibilities, the classical integration can be replaced by a numerical summation. In the case of the total outdoor illuminance from the whole sky vault DV it is possible to divide the hemisphere into horizontal sky rings, or vertical sky meridians converging in the zenith, or horizontal lunes. The relation of DV and LZ is very interesting from another point of view, because LZ / DV has proved to be a classifying or sorting ratio for sky types when DV and LZ are measured simultaneously and regularly. This ratio can be derived from eq. (4.66) as

LZ / DV = f (ZS) ϕ (0°) / [ ∫_{Z=0°}^{90°} ∫_{A=0°}^{360°} f (χ) ϕ (Z) sin Z cos Z dZ dA ]   (4.67)
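As a worked illustration, the integral of eq. (4.67) can be replaced by a summation over small sky elements; the sketch below uses a 1-degree grid (an arbitrary choice of ours) and reuses relative_luminance with the sky type 12 parameters from the previous sketch. A quick sanity check: for a sky with La/LZ = 1 everywhere the double sum equals π, reproducing LZ / DV = 1/π = 0.318311 of eq. (4.68) below.

```python
from math import radians, sin, cos

def lz_over_dv(Zs_deg, a, b, c, d, e, step_deg=1.0):
    """LZ/DV sorting ratio, eq. (4.67), by numerical summation.
    Requires relative_luminance() from the previous sketch."""
    dZ = dA = radians(step_deg)
    total = 0.0
    Z_deg = step_deg / 2                       # element centres
    while Z_deg < 90.0:
        row = 0.0
        Az_deg = step_deg / 2
        while Az_deg < 360.0:
            row += relative_luminance(Z_deg, Az_deg, Zs_deg, a, b, c, d, e)
            Az_deg += step_deg
        Z = radians(Z_deg)
        total += row * sin(Z) * cos(Z) * dZ * dA   # (La/LZ) sin Z cos Z dZ dA
        Z_deg += step_deg
    return 1.0 / total                         # reciprocal of the DV/LZ sum

pars = dict(a=-1.0, b=-0.32, c=10.0, d=-3.0, e=0.45)   # sky type 12
print(lz_over_dv(60, **pars))                          # sun at 30 deg altitude
```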
Figure 29. Ratio LZ / DV for all fifteen standard skies under solar altitudes not exceeding 70º.

Figure 30. Ratio LZ / DV for all fifteen standard skies under solar altitudes above 70º.
This classifying ratio for sorting sky types is basically dependent on the solar altitude (Kittler, Darula and Perez, 1998 and Darula, Kittler and Wittkopf, 2006), as shown in Figures 29 and 30 (the latter for tropical countries with solar altitudes of 70 – 90 degrees). Except for sky types
with a unity indicatrix, for which LZ / DV is constant at all sun heights, the indicatrix influence causes various rises of the appropriate standard curves.
Figure 31. Coverage of the standard sky space with data collected in Bratislava during August 2001 in 5-minute time steps.

Figure 32. August 2001 data in 1-minute time steps represent the monthly changes even more closely.
Figure 33. Sunless dull December 1995 shows the prevailing occurrence of overcast sky types.
When regular zenith luminance recording is done simultaneously with diffuse skylight illuminance, then monthly 1- or 5-minute data can be merged into Figure 29, as shown in Figures 31 to 33. Thus the frequency of sky type occurrence is evidently shown in any month and can be compared with the monthly average relative sunshine duration. Examples are presented for two extreme Bratislava months, i.e. the sunniest month, August 2001, when the monthly average relative sunshine duration was the highest, s = 0.734 (Figures 31 and 32). In the vicinity of the horizontal lines for overcast sky types there are only a few lonely points indicating moments with cloud presence during some dynamic half-days, while the prevailing clear skies occupy the densely covered area of clear sky curves rising with solar altitudes up to 58º. At the other extreme is the dullest overcast winter month, i.e. December 1995 with s = 0.09, presented in Figure 33. The merged December data are restricted to only low solar altitudes under 19º and show the maximum coverage over overcast sky types 1 to 6 with only a few cases around the clear sky type 12. Anyhow, the days in wintertime are short and almost without any sunshine; even the skylight penetration is reduced to small levels with extremely low DV / EV = 0.05 to 0.15 ratio values. Under the assumption of homogeneous atmospheric conditions a quite perfect interrelation exists between the absolute zenith luminance and the resulting diffuse horizontal illuminance for overcast sky type 5, which has an overall uniformity:
LZ = DV / π = 0.318311 DV [cd/m²]   (4.68)

and

LZ = 42.59 (DV / EV) sin γS [kcd/m²]   (4.69)
i.e. in this case B = 42.59, as shown also in Table 2. Similarly, for sky type 1 B = 54.63 and for sky type 3 B = 48.3, while for all these cases C = 1 and D = E = 0. If the solar altitude is under 75º, then an approximation formula for the calculation of the absolute zenith luminance can be used (Kittler et al., 1998):

LZ = (DV / EV) [ B (sin γS)^C / (cos γS)^D + E sin γS ] [kcd/m²]   (4.70)
While the ISO/CIE standards define only the relative sky luminance distribution after eq. (4.57), it is possible to determine also absolute luminance models after eq. (4.70) if DV / EV is known.

Table 2. Parameters for calculating DV / EV in absolute units and recommended DV / EV ratios

Sky type | Sky code | A1 *)  | A2 *)  | B     | C    | D    | E     | DV / EV
1        | I.1      |        |        | 54.63 | 1.00 | 0.00 | 0.00  | 0.10
2        | I.2      |        |        | 12.35 | 3.68 | 0.59 | 50.47 | 0.10
3        | II.1     |        |        | 48.30 | 1.00 | 0.00 | 0.00  | 0.15
4        | II.2     |        |        | 12.23 | 3.57 | 0.57 | 44.27 | 0.20
5        | III.1    |        |        | 42.59 | 1.00 | 0.00 | 0.00  | 0.22
6        | III.2    |        |        | 11.84 | 3.53 | 0.55 | 38.78 | 0.30
7        | III.3    | 0.957  | 1.790  | 21.72 | 4.52 | 0.63 | 34.56 | 0.35
8        | III.4    | 0.830  | 2.030  | 29.35 | 4.94 | 0.70 | 30.41 | 0.40
9        | IV.2     | 0.600  | 1.500  | 10.34 | 3.45 | 0.50 | 27.47 | 0.35
10       | IV.3     | 0.567  | 2.610  | 18.41 | 4.27 | 0.64 | 24.04 | 0.30
11       | IV.4     | 1.440  | -0.750 | 24.41 | 4.60 | 0.72 | 20.76 | 0.26 (0.30 #)
12       | V.4      | 1.036  | 0.710  | 23.00 | 4.43 | 0.74 | 18.52 | 0.25 (0.30 #)
13       | V.5      | 1.244  | -0.840 | 27.45 | 4.61 | 0.76 | 16.59 | 0.26 (0.30 #)
14       | VI.5     | 0.881  | 0.453  | 25.54 | 4.40 | 0.79 | 14.56 | 0.28 (0.30 #)
15       | VI.6     | 0.418  | 1.950  | 28.08 | 4.13 | 0.79 | 13.00 | 0.28 (0.30 #)

*) These sky types usually occur without sunlight.
#) Situations with a shaded sun disc, i.e. PV / EV = 0.
Note: Parameter B was derived for the average value of LSC = 133.8 klx.
Also, for certain purposes, when only the minimum or mean DV / EV ratio is sufficient or can be assumed, the simplest way to determine the skylight illuminance DV outdoors is:

DV = 133 334 (DV / EV) sin γS [lx]   (4.71)
The same calculation can be done for any sky type respecting frequent conditions in an arbitrary locale. In sunny climates, in the tropics or in the summer season, the interrelation of simultaneous skylight and sunlight has to be taken into account under cloudless and clear sky types 11 to 15. There is a dependence of DV / EV on PV / EV because both are approximately influenced by the luminous turbidity factor TV (Darula and Kittler, 2005), i.e.

DV / EV = [ (A1 TV + A2) sin γS + 0.7 (TV + 1) X + 0.04 TV ] / [ B X + E sin γS ]   (4.72)

where X = (sin γS)^C / (cos γS)^D and the parameters A1, A2, B, C, D and E are in Table 2, while usual TV values are in Table 1.
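A compact sketch ties eqs. (4.70) and (4.72) to the Table 2 parameters. The row used below is sky type 12 (A1 = 1.036, A2 = 0.710, B = 23.00, C = 4.43, D = 0.74, E = 18.52), copied from the table; LZ is returned in kcd/m² following the note that B was derived for LSC = 133.8 klx, and the function names are our assumptions:

```python
from math import radians, sin, cos

def dv_ev_clear(gamma_s_deg, Tv, A1, A2, B, C, D, E):
    """DV/EV under cloudless skies from the luminous turbidity factor TV
    and the solar altitude gamma_s, eq. (4.72)."""
    g = radians(gamma_s_deg)
    X = sin(g)**C / cos(g)**D
    return ((A1*Tv + A2) * sin(g) + 0.7*(Tv + 1)*X + 0.04*Tv) / (B*X + E*sin(g))

def zenith_luminance(dv_ev, gamma_s_deg, B, C, D, E):
    """Absolute zenith luminance LZ [kcd/m2], valid for solar altitudes
    under 75 deg, eq. (4.70)."""
    g = radians(gamma_s_deg)
    return dv_ev * (B * sin(g)**C / cos(g)**D + E * sin(g))

# Sky type 12, solar altitude 40 deg, TV = 4:
p = dict(A1=1.036, A2=0.710, B=23.00, C=4.43, D=0.74, E=18.52)
r = dv_ev_clear(40, 4, **p)
print(r, zenith_luminance(r, 40, p["B"], p["C"], p["D"], p["E"]))
```

For these inputs the sketch gives DV / EV ≈ 0.25 and LZ ≈ 3.9 kcd/m², consistent with the recommended DV / EV = 0.25 for sky type 12 in Table 2.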
4.6. The Daily Illuminance Courses during Monthly Measurements

Sunlight and skylight studies also try to encompass all seasonal and annual illuminance courses, using regular measurements to get closer information on real long-term daylight conditions. In temperate climate zones, whether in the Americas, Asia or Europe, the seasonal weather changes in sunshine, temperature and cloudiness follow the sun-path declination range throughout the year. Under such conditions every month has its special character expressed and reflected in sunlight and skylight availability. Monthly records of everyday global and diffuse illuminance measurement results were already suggested by Dumortier et al., 1994. The program TIMElux 1.0, available in the folder Library at http://idmp.entpe.fr/, allows plotting time-dependent variables in an efficient and compact way for an entire month of CIE IDMP measurements. A similar system was used in some CIE IDMP stations, and an example for the sunniest month, August 2001, is shown in Figure 34. The measured global illuminance courses are represented by solid curves while the diffuse ones have dashed curves. Due to plenty of sunny periods with high sunlight, the overall illuminances are very high, especially at noontime. Note that a typical clear morning half-day on 6th August changed to a dynamic afternoon. During that morning very low diffuse levels occurred, resulting from the blue cloudless sky. In the afternoon the incoming cloudiness caused the increase in diffuse illuminance. Higher turbidities of whitish skies also raise the diffuse component, as can be noticed e.g. on the 17th and in the morning of the 20th August 2001.
Figure 34. Daily global and diffuse illuminance courses measured in 1-minute steps in a summer month.
Figure 35. Daily global and diffuse illuminance courses measured in 1-minute steps in a winter month.
Figure 36. The P-D-G diagram with inserted data measured in August 2001.
Figure 37. The P-D-G diagram with inserted data measured in December 1995.
The dullest winter month in the longer term of Bratislava recordings was December 1995, shown in Figure 35. Except for a few days like the 21st and 28th, which evidently had sunshine, all days were very cloudy or overcast with very low illuminance levels altogether. For instance, on the 1st, 10th, 16th or 31st December 1995 illuminance levels below 6000 lx were recorded. Comparing clear days in August 2001 and those in December 1995, the noon illuminance levels are also remarkable. While in August global illuminances of about 100 000 lx are quite frequent, in December only levels around 30 000 lx occurred. More progressive evaluation methods were published by Kittler and Darula, 2002, using the P-D-G diagram with inserted data recorded during a particular month. The aim of such analysis should be further knowledge of the basic interrelation between sunlight and skylight components under various turbidities and sky patterns. The monthly characteristics are even more distinctly shown in Figures 36 and 37. The ratio DV / EV is suppressed in the diagrams to the vertical line when no sunshine is present, but under sunny conditions, especially in Figure 36, this ratio gradually lowers with the increase of PV / EV. In addition to the possibilities to characterize the sky types in accordance with the LZ / DV selection ratio already shown in Figures 32 and 33, the overall character of the monthly frequency of occurrence of sky types can be analyzed using the LZ / DV selection
ratio in relation to simultaneous increase of GV / EV .
Figure 38. The LZ / DV relation to the simultaneous GV / EV ratio in the case of August 2001, using Bratislava 5-minute data.
Figure 39. The LZ / DV relation to the simultaneous GV / EV ratio in the case of December 1995, using Bratislava 5-minute data.
If the LZ / DV ratio is taken into account for the Bratislava cases in Figures 38 and 39, several facts can be considered:
• In the sunniest Bratislava month, August 2001, only a few dark skies close to the CIE Overcast Standard are characterized by LZ / DV > 0.32, while sunless cases occurred randomly under GV / EV or DV / EV lower than 0.2 (black dots in Figure 38). In contrast, in the dull December 1995 very many black dots in Figure 39 indicate the prevailing overcast conditions.
• The brighter overcast or cloudy skies occur in the area given by the range GV / EV over 0.2 and under 0.5 with LZ / DV < 0.3 in both figures.
• The rising area of open circles documents the clear sky situations with sunshine, which were dominant in August 2001 and represent also the few sunny periods in December 1995. Note the horizontal placement around LZ / DV = 0.14 in Figure 39 caused by low winter solar altitudes.
4.7. The Relation of Sunlight and Skylight Availability under Sunny Conditions

Analysis of one-year and long-term data sets of five-minute averages of global and diffuse illuminance levels with simultaneous LZ / DV descriptors followed several hypotheses and questions:
a) Is the DV / EV ratio decreasing with the rise of the PV / EV ratio, i.e. does the diffuse sky illuminance drop with the increase of the parallel sun-beam illuminance level?
b) What DV / EV spread is caused under different turbidities TV, when the turbidity influences the PV / EV ratio under various solar altitudes?
c) Is the DV / EV spread influenced by indicatrix and gradation differences, i.e. are sky patterns more important than simultaneous PV / EV ratios or the momentary TV turbidity?
d) How can these interrelations be simply interpreted or mathematically approximated?
e) How does the random mixture of sky types intermingle, resulting in the DV and GV courses during the day in different months or seasons, under the interaction of various solar altitudes, sunshine duration, cloudiness and weather conditions (e.g. fog, rain, snow etc.)?
f) How influential are non-homogeneous atmospheric conditions (e.g. cloudiness patterns, cloud type compositions, their placement on the sky vault, cloud movements etc.) on the DV / EV and PV / EV ratio changes, in comparison with those under homogeneous sky standards?
Hypotheses a) and b) are fully justified, as Figure 36 shows: under very low PV / EV ratios the DV / EV ratios cover a rather large triangular spread area, narrowing towards the virtual intersection point PV / EV = 0.8 and DV / EV = 0.1. At the same time, if
noticeable sunshine PV / EV = 0.1 is passed, the DV / EV ratio gradually drops until the value 0.2 is reached. The analysis of the interdependence of the DV / EV and PV / EV ratios due to sky pattern and turbidity changes (Darula and Kittler, 2005) allowed drawing the relations expressed by eq. (4.72), as shown in Figure 40.
4.8. Minimal Skylight Availabilities under Overcast Skies without Sunlight

The severely adverse daylight conditions in Northern regions of Europe were felt during wintertime a long time ago. Until the invention and adoption of artificial lighting in interiors, daytime activities during short winter days were reduced considerably, and certain deprivation, dullness and depression occurred. Therefore the skylight illuminance level was studied with the aim of setting the least criteria for window design or interior daylighting. Several outdoor diffuse illuminance levels were proposed as the lowest borderline, like 3000 lx, 4000 lx or 5000 lx, and were adopted in various European national standards. All were based on critical overcast skies prevailing in wintertime, representing critical sunless skylight availabilities, and were accepted for daylight calculations or prediction and evaluation methods and tools applying the Daylight Factor or Sky Component concept (Kittler, 2007).
Figure 40. Under sunny conditions the DV / EV ratio is influenced by turbidity and solar altitude.
Figure 41. Example of a day with the prevailing sky type 1.
It is real that overcast sky types prevail in temperate climate zones in autumn and winter, and the typical CIE Overcast Sky with LZ / DV = 0.4083 occurs under densely multilayer Stratus cloudiness, sometimes associated also with inversion fog. Such an exemplary day in the Bratislava database was 11th November 1995, shown in Figure 41. Due to the slightly changing transparency of the cloud layers the DV / EV ratio documents some fluctuations, different during the morning half-day and in the afternoon.
Figure 42. Occurrence frequency of the DV / EV ratio under overcast skies measured in Bratislava.
Because the DV / EV ratio has a substantial role also in defining the absolute sky luminance via the zenith luminance in eq. (4.70), a detailed study of the DV / EV occurrence frequency was done using five years of measurement data. The selection of sky types followed the LZ / DV
narrow range strips around the theoretical curves, and the relative frequency of DV / EV occurrence was determined as shown in Figure 42 for the overcast sky types. Although in Bratislava the prevailing sky type was 2, quite many cases of sky type 1 form the winter daylight climate with the exceptionally low DV / EV = 0.1 level. The mean and mode values show a gradual increase in DV / EV from sky type 1 to 5, indicating better atmospheric transparency and a shift from grey to whiter overcast skies. Note that when no other information on the DV / EV ratio is available, the recommended values in Table 2 can be used.
Conclusions

The main reason for this chapter is to emphasize that in solar energy research and theory there are two equivalent spheres forming a twin system of quantities and units (Darula and Kittler, 2006) in interrelated broad-band spectrum ranges:

• one spreading in the whole solar spectrum of radiation, expressed usually in W/m², i.e., within 0.5 to 15000 nm (Gueymard, 2004),
• the other extending only within the visible part of the spectrum, i.e., between 380 and 780 nm, where all quantities are weighted by the spectral sensitivity of the human eye, i.e., by
the relative luminous efficiency of monochromatic radiation for photopic vision V (λ), expressed in luminous units like lm/m² or lx.

Both these systems depend on the local availability of solar radiation, utilizing either its energy in thermal terms or utilizing sunlight and skylight in terms of visual orientation, daylighting and non-visual stimulation of human bodies. Due to the restriction and selective efficiency of human visual sensitivity, with its maximum 683 lm/W at the wavelength 555 nm, the light availability depends on:

• The type of the light source, i.e., whether it radiates fluently as a high-temperature source like the sun, an incandescent lamp, or a fluorescent tube and a light emitting diode (LED) radiating more or less effectively in spectral strips within the visible spectrum.
• If sunlight availability is to be considered, then very important is the solar altitude influencing illuminance levels, and their restriction by shading clouds and the reducing effect of atmospheric turbidity conditions during daytime at ground level. Sunlight availability is usually defined as the illuminance by parallel sun beams on a horizontal plane at any location at a particular time within a day, i.e., during its daily course, in a particular month or season, taking also into consideration local information on the relevant relative sunshine duration.
• When skylight availability at a particular location is to be evaluated on a horizontal plane illuminated by the whole unobstructed sky vault, then the momentary sky luminance pattern defines the large hemispherical diffuse sky source. If the sun position is not shaded, the direct sunlight is enhanced by the very intensive sky luminance of the so-called solar corona. At the other extreme, under densely overcast sky conditions, the sky luminance has a rather uniform pattern with small or no azimuth differences, and only its gradation from horizon to zenith is evident. Thus sky luminance patterns are formed both by sun-beam scattering influences as well as by gradation conditions caused by the atmospheric thickness and content.
Principally, both sunlight and skylight availabilities at ground level locales are primarily predetermined by the extraterrestrial availability on a fictitious horizontal plane just above a particular location. Momentary atmospheric transmittance is characterized by the ratio of the sunlight and skylight ground level illuminance to the simultaneous extraterrestrial level. Due to very frequent illuminance changes in the local daylight climate it is very important to measure regularly both sunlight and skylight horizontal illuminance components or their global and diffuse levels. To identify also the frequent occurrence of sky luminance patterns, at least regular zenith luminance measurements have to be added with time-coordinated steps. Such three recordings (GV, DV and LZ) have to be measured at least at all meteorological stations for the specification of local daylight availability. Then, in accordance with their time data, the basic information can be derived, e.g.:

• momentary solar altitude and azimuth,
• extraterrestrial availability,
• atmospheric air mass,
• atmospheric transmittance,
• sunshine duration,
• sunlight availability,
• sky luminance pattern,
• skylight availability.
It is important to note that past and current irradiance data collected either at meteorological observatories or by satellites cannot substitute for these regular light measurements, because the luminous efficacy concept relating illuminance to irradiance levels in lm/W is questionable and not exact, due to irregular skylight spectrum shifts connected with different sky types as well as differences caused by influences of solar altitude and atmospheric condition changes. The lack of local daylight recordings in the current meteorological data is a critical insufficiency and is nowadays felt as a historical mistake. The absence of daylight climate information on vast territories like North America, Africa or Asia is a serious deficiency when a sound basis for window design practice, photovoltaic utilities and energy-saving management has to be tested in annual performance with long-term economic and environmental consequences. The Daylight Committee of the International Commission on Illumination (CIE) initiated in 1983 the International Daylight Measuring Programme (IDMP), announced officially by its President Bodmann, 1991, as well as by the representative of the World Meteorological Organization (WMO), Dr. Forgan, 1991. However, in spite of the WMO promise no meteorological station started this cooperation, which should have enabled the study of daylight availability worldwide. Now data are available for different locales and climatic regions in more detail and complexity only thanks to daylight research teams (see http://idmp.entpe.fr). Sunlight and skylight availability studies need to encompass all seasonal and annual courses to get closer to the reality of long-term daylight conditions. The examples and explanations in this chapter are based, due to copyright restrictions, only on the European climate, especially on Bratislava and Athens measured data. However, these should serve as hints to analyze and interpret regular momentary and locally gathered illuminance and luminance data, not only to record their daily courses but also to disclose more basic interrelations that are generally valid world-wide. These are usually behind the natural lawfulness of daylight availability. The local differences can be specified only thanks to the research initiative of a few centers. Probably the most sophisticated collections of measured data were gathered in Hong Kong (Li et al., 2004) and in Singapore (Wittkopf and Soon, 2007), where besides illuminance measurements also sky scans are recorded in 10-minute or quarter-of-an-hour time steps. These allow us not only to follow daily courses or probable changes in basic daylight situations but also to specify sky patterns or sky types with their probability of frequency occurrence, or to specify which sky type is locally dominant. Preliminary information on prevalent skies was published by Tregenza, 1999, for the maritime climate. Anyhow, there are still locales world-wide where specific sunlight and skylight availability has to be documented and explained for practical application and energy evaluation purposes based on the daylight reference year, as e.g. for Bratislava and Athens (Darula et al., 2004, Markou et al., 2007b).
Information on sunlight and skylight availability is becoming quite necessary:

• for solving health and wellness problems in daily, seasonal and annual life circles and comfort, due to circadian, seasonal and spaceship disorders or daily insufficiency,
• for glare and visibility studies and problems in densely built urban environments with mirror-reflective facades or deep interiors and tunnel exits,
• for energy utilization means like solar collectors, photovoltaic panels and daylighting of interiors via windows or hollow light guides, with needs for their annual effectiveness criteria or energy rating of buildings,
• for reducing electric energy for artificial lighting in interiors.
Year-round typical daylight climate in various cities and regions is needed in monthly and seasonal changes, specified in sunshine duration and sky type occurrence probabilities, to evaluate environmental comfort or risks as well as energy conservation possibilities. Of course many of these tasks need also further studies and applications of outdoor sunlight and skylight to be predicted either on vertical or sloped surfaces outdoors or in interiors via windows, hollow light guides or sunshade fixtures and devices, influenced also by reflections from obstructing buildings, greenery or terrain. However, all future daylight applications and studies have to rely on the local outdoor sunlight and skylight availability, which is of primary importance for expressing the frontiers of possible energy utilization and application effectiveness.
References

Bemporad, A. (1904). Mitteilung der Grossh. Sternwarte Heidelberg (Transactions of the Mountain Star Observatory, Heidelberg), Vol. 4.
Bouguer, P. (1729). Essai d'Optique sur la gradation de la lumière. Publ. C. Jombert, Paris. (Reedit. Gauthier-Villars, Paris, 1921).
Bodmann, H. W. (1991). Opening of the CIE Conference by the President. Proc. 22nd Session CIE, Vol. 2, 3.
Bouguer, P. (1760). Traité d'Optique sur la gradation de la lumière. Paris. (Latin translation by J. Richtenburg, Vienna, Prague and Triest, 1762; Russian translation by N. A. Tolstoy and P. P. Feofilova with notes by Prof. A. A. Gershun, Publ. Academy of Sciences USSR, Moscow, 1950).
CIE-Commission Internationale de l'Éclairage (1955). Natural Daylight – Official Recommendation. Compte Rendu 13th Session CIE, 2, part 3.2, 2-4.
CIE-Commission Internationale de l'Éclairage (1973). Standardisation of luminance distribution on clear skies. CIE Publ. 22, CIE Central Bureau, Paris.
CIE-Commission Internationale de l'Éclairage (1990). CIE 1988 2º spectral luminous efficiency function for photopic vision. Publ. CIE 86-1990, CIE Central Bureau, Vienna.
CIE-Commission Internationale de l'Éclairage (1994). Guide to recommended practice of daylight measurements. CIE Publ. 108, CIE Central Bureau, Vienna.
CIE-Commission Internationale de l'Éclairage (2003). Spatial distribution of daylight - CIE Standard General Sky. CIE Standard S 011/E:2003, CIE Central Bureau, Vienna.
Clear, R. (1982). Calculation of turbidity and direct sun illuminance. Memo to the Daylighting Group LBL, Berkeley, Calif., USA.
Darula, S., Kittler, R., Kambezidis, H. & Bartzokas, A. (2004). Generation of a Daylight Reference Year for Greece and Slovakia. Final Report GR-SK 004/01, Part 1 and 2.
Darula, S. & Kittler, R. (2005). New trends in daylight theory based on the new ISO/CIE Sky Standard: 1. Zenith luminance on overcast skies. Build. Res. J., 52, 3, 181-197.
Darula, S. & Kittler, R. (2005). New trends in daylight theory based on the new ISO/CIE Sky Standard: 3. Zenith luminance formula verified by measurement data under cloudless skies. Build. Res. J., 53, 1, 9-33.
Darula, S., Kittler, R. & Gueymard, Ch. A. (2005). Reference luminous solar constant and solar luminance for illuminance calculations. Solar Energy, 79, 5, 559-565.
Darula, S. & Kittler, R. (2006). Twin system: Descriptors for the evaluation of illuminance and irradiance availability. Build. Res. J., 54, 3-4, 189-197.
Darula, S. & Kittler, R. (2006). Outdoor illuminance levels in the tropics and their representation in Virtual Sky Domes. Archit. Sci. Review, 49, 3, 301-313.
Dumortier, D., Fontoynont, M. & Avouac-Bastie, P. (1994). Daylight availability in Lyon. Proc. European Conf. on energy performance and indoor climate in buildings, Lyon, 1315-1320.
Forgan, B. W. (1991). The International Daylight Measurement Year. Proc. 22nd Session CIE, Vol. 2, 7.
Gueymard, Ch. A. & Kambezidis, H. D. (1997). Illuminance turbidity parameters and atmospheric extinction in the visible spectrum. Q. J. R. Meteorol. Soc., 123, 679-697.
Gueymard, Ch. A. (2004). The sun's total and spectral irradiance for solar energy applications and solar radiation models. Solar Energy, 76, 423-453.
IES Calculation Procedure Committee (1984). Recommended practice for the calculation of daylight availability. Journ. Illum. Eng. Soc. of North Amer., 13, 4, 381-392.
Igawa, N., Koga, Y., Matsuzawa, T. & Nakamura, H. (2004). Models of sky radiance distribution and sky luminance distribution. Solar Energy, 77, 2, 137-157.
ISO-International Standard Organisation (2004). Spatial distribution of daylight - CIE Standard General Sky. ISO Standard 15469, 2004.
Kasten, F. & Young, A. T. (1989). Revised optical air mass tables and approximation formula. Appl. Optics, 28, 22, 3735-4738.
Kittler, R. (1967). Standardisation of the outdoor conditions for the calculation of the Daylight Factor with clear skies. Proc. Sunlight in Buildings, Bouwcentrum Rotterdam, 273-286.
Kittler, R. (1985). Luminance distribution characteristics of homogeneous skies: a measurement and prediction strategy. Light. Res. and Technol., 17, 4, 183-188.
Kittler, R. (1993). Relative scattering indicatrix: Derivation from regular radiance/luminance sky scans. Light. Res. and Technol., 25, 3, 125-127.
Kittler, R. (1995). Alternative possibilities to define the zonal daylight climates on the sunshine duration basis in Europe. Availability of daylighting in Europe and design of a daylighting atlas. Final Report Jou2-CT92-0144, Vol. 2, Appendix 6, Athens.
Kittler, R., Darula, S. & Perez, R. (1998). A set of standard skies characterizing daylight conditions for computer and energy conscious design. Final Report of the SK-US grant project 92 052, Bratislava – Albany.
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
Sunlight and Skylight Availability
181
Kittler, R. & Darula, S. (2006). The method of aperture meridians: a simple calculation tool for applying the ISO/CIE Standard General Sky. Light. Res. and Technol., 38, 2, 109122. Kittler, R. (2007). Daylight Prediction and Assessment: Theory and Design Practice. Archit. Sci. Review, 50, 2, 94-99. Kocifaj, M. & Darula S. (2002). ModelSky – jednoduchý nástroj pre modelovanie rozloženia jasu na oblohe. (ModelSky – a simple tool for modeling the luminance distribution on the sky). Meteorological Bulletin, Prague, 55, 4, 110-118. Kocifaj, M., Darula, S. & Kittler, R. (2008). HOLIGILM: Hollow light guide interior illumination method – An analytic calculation approach for cylindrical light-tubes. Solar Energy, 82, 3, 247-259. Krat, V. A. (1943). Indikatrisa rassayaniya sveta v zemnoy atmosphere. (Indicatrix of light diffusion in the earth atmosphere. Astronom. J., 20, 5-6. Lambert, J. H. (1760). Photometria sive de mensura et gradibus luminis, colorum et umbrae, Augsburg, (German translation by E. Anding, Klett Publ., Leipzig, 1892). Lambert, J. H. (1773). Merkwűrdigste Eigenschaften der Bahn des Lichts durch die Luft und űberhaupt durch verschiedene sphärische und concentrische Mittel. (Notable qualities of the light path through the air and mainly through various spherical and concentric particles). German translation from the French origin. Haude – Spener, Berlin. Li, D. H. W., Lau, Ch.C. S. & Lam, J. C. (2004). Journ. Solar Energy Eng., 126, 8, 957-964. Linke, F. (1922). Transmission Koefizient und Trűbungsfactor. Beitr. Phys. Frei Atmosf., 10, 90. Makhotkin, L. G. (1960). Ekvivalent massy Bemporada (Equivalent of the Bemporad´s mass). Trudy Glav. Geofyz. Observ. (Transac. of the Main Geophyzical Observatory), No.100, 15-16. Markou, M. T., Bartzokas, A. & Kambezidis, H. D. (2007a). A new statistical methodology for classification of sky luminance distribution based on scan data. Atmospheric Research, 86, 261-277. Markou, M. T., Kambezidis, H. D., Bartzokas, A., Darula, S. & Kittler, R. (2007b). Generation of daylight reference years for two European cities with different climate: Athens, Greece and Bratislava, Slovakia. Amospheric Res., 86, 3-4, 315-329. Moon, P. & Spencer, D. E. (1942). Illumination from a non-uniform sky. Illum. Emg., 37, 10, 707-726. Navvab, M., Karayel, M., Ne´eman, E. & Selkowitz, S. (1984). Analysis of atmospheric turbidity for daylighting calculations. Energy and Buildings, 6, 3, 293-303. Oki, M., Nakamura, H., Rahim, M. R., Shin, I. & Iwata, T. (1991). Relative sunshine duration at various points in the world. Proc. CIE 22nd Session, Vol. 1, Part. 1, p. 7-8 + handouts. Perez, R., Seals, R. & Michalsky, J. (1991). An all-weather model for sky luminance distribution. Solar Energy, 50, 3, 235-245. Petherbridge, P. (1955). The brightness distribution of the overcast sky when the ground is snow-covered. Quart. J. of R. Meteorol. Soc., 81, 349, 476 – 477. Pierpoint, W. (1982). Recommended practice for the calculation of daylight availability. US IES Daylight Guide, Draft published in Journ., IES NA, 1984, 13, 4, 381-392. Powell, G. L. (1982). The ASHRAE Clear Sky Model – an evaluation. ASHRAE Journal, 24, 11, 32-34.
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
182
Stanislav Darula and Richard Kittler
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
Rayleigh–Strutt, L. M. (1899). On the transmission of light through an atmosphere containing small particles in suspension and on the origin of the blue sky. Philos. Mag., 47, 5, 375-384. Roy, G. G., Kittler, R. & Darula, S. (2007). An implementation of the Method of Aperture Meridians for the ISO/CIE Standard General Sky. Light. Res. and Technol., 39, 3, 253- 264. Ruck, N. C. & Kittler, R. (1987). Evaluation of the Linke and Illuminance turbidity factors for homogeneous skies. Proc. 21st CIE Session, Venice, 1, 242-243. Smith, F. & Wilson, C. B. (1976). The shading of ground by buildings. Build. and Environ., 11, 3, 187-195. Tregeza, P. R. & Sharples, S. (1993). Daylighting algorithms. School of Architectural Studies Univ. of Sheffield. Tregenza, P. R. (1999). Standard skies for maritime climates. Light. Res. and Technol., 31, 3, 97-106. Tregenza, P. R. (2004). Analysisng sky luminance scans to obtain frequency distribution of CIE standard general skies. Light. Res. and Technol., 36, 4, 271-281. Wittkopf, S. K. (2004). A method to construct Virtual Sky Domes for use in standard CADbased light simulation software. Archit. Sci. Review, 47, 2, 275-286. Wittkopf, S. K. & Soon L. K. (2007). Analysing sky luminance scans and predicting frequent sky patterns in Singapore. Light. Res. and Technol., 39, 1, 31-51. WMO-World Meteorological Organisation, Commission for Instruments and Methods of Observation, (1982), Abridged Final Report of the VIII Session in Mexico City. WMO Document No. 590.
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
In: Advances in Energy Research, Volume 1
Editor: Morena J. Acosta, pp. 183-201
ISBN: 978-1-61668-994-0
© 2010 Nova Science Publishers, Inc.

Chapter 5

THE INEVITABILITY OF CONTINUING GLOBAL ANTHROPOGENIC ENVIRONMENTAL DEGRADATION

G.P. Glasby*
Russian Academy of Sciences, Apatity, Murmansk Region, Russia
‘Anyone who believes exponential growth can go on forever in a finite world is either a madman or an economist’
Kenneth Boulding

‘Civilization occurs by geological consent, subject to change without notice’
Will Durant
Abstract

In this paper, growth rates of world population, world Gross Domestic Product (GDP) and total wealth created for the preceding 10,000 years have been calculated and extrapolated through the 21st Century based on various scenarios in order to assess the potential environmental impact of increasing world population and consumption throughout this century. The results demonstrate that between 8 and 26 times more wealth will be created in the 21st Century than in the whole of the preceding human history, depending on assumptions regarding the growth rates of world population and world GDP. These calculations show for the first time the unprecedented increase in resource consumption that will occur in the 21st Century compared with that in the preceding 10,000 years of human history. This increase will result in a massive environmental deficit by the turn of the century and implies that we are on course to overwhelm the natural environment on which we depend for our tenure of this planet within this century. This will pose severe problems for the larger world population anticipated later this century. In this situation, it will be necessary to moderate our lifestyles in an attempt to achieve a more sustainable development of the environment. Unless vigorous steps are taken to curtail population growth, resource consumption and global CO2 emissions, human prospects for the 21st Century and beyond do not look particularly encouraging.

* E-mail address: [email protected] Russian Academy of Sciences, 14 Fersman Street, Apatity, Murmansk Region, 184209 Russia. (Corresponding author)
Keywords: exponential growth, wealth creation, 21st Century, environmental impact, sustainable development.
1. Introduction

1820 is a key date in human history (Glasby 2002). It marks the beginning of a huge increase in world population, from about one billion in 1820 to almost 6.3 billion in 2000. This has been accompanied by a 180-fold increase in world GDP as a result of industrialization and the large-scale utilization of fossil fuels. As a consequence, we are putting increasing stress on the environment and moving steadily away from the ideal of sustainable development as envisaged in the Brundtland Report (World Commission on Environment and Development 1987).

The 21st Century represents a critical juncture in the history of mankind. Either we continue the exponential growth in world population and world GDP that has characterized the past 190 years or we modify our lifestyle in an attempt to reduce our already considerable impact on the environment. There is already clear evidence that we live in a markedly non-sustainable world and that this trend to non-sustainability will increase markedly over the coming decades unless we make major changes in our patterns of consumption and attitudes to the environment (Glasby 1995, 2002, 2006).

Crutzen and Stoermer (2000) have defined the present geological epoch as the Anthropocene to emphasize the central role of mankind in geology and ecology. Crutzen and Steffen (2003) subsequently divided the Anthropocene into three phases. The first phase, beginning sometime between 8000 and 5000 years B.P., was the result of long-term emissions of CO2 and CH4 from forest clearance for agriculture and animal husbandry (Ruddiman 2003). The second phase began with the industrial revolution and was thought to have started in 1784 with the development of the steam engine. The third phase began in 1950 and marks the period in which human activities advanced from influencing the global environment in some ways to dominating it in many ways.

In this article, world population, world GDP, total wealth created and atmospheric CO2 over the 21st Century have been calculated based on various assumptions of growth rates in world population and per capita GDP in an attempt to assess the possible extent of anthropogenic impacts on the environment in the future.
2. Methods

In order to calculate the possible impact of the human population on the environment, I have listed three parameters, world population, world per capita GDP and world GDP, at a single 5,000-year interval from 8000 B.C. to 3000 B.C., at 500-year intervals for the period from 3000 B.C. to 1500 A.D. (Table 1) and at 50-year intervals for the period from 1500 to 2000 (Table 2). DeLong (1998) has described the methods for calculating these parameters in some detail. From these data, it is possible to calculate the growth rate of each of these parameters for each of these 5,000-, 500- and 50-year intervals. The total wealth created in each interval could then be derived by summing the annual world GDP for each interval. This enabled the growth rate of the total wealth created for each interval to be calculated (Tables 1 and 2). In making these calculations, I have roughly followed the divisions of Crutzen and Steffen (2003) for the Anthropocene.

In order to make similar calculations for the period from 2000 to 2100, it is assumed that the world population will grow from 6.272 billion in 2000 to a median value of 8.4 billion in 2100 with a probable range of 5.6 to 12.1 billion (Lutz et al. 2001). This corresponds to an average growth rate of the median world population of 0.29% p.a. during the 21st Century. Increases in world GDP were calculated based on three scenarios: firstly, on the assumption of Keynes (1931) that world GDP will naturally increase by factors of 4 and 8 per century (see later) and, secondly, on the assumption that world per capita GDP will continue to grow at the rate experienced between 1950 and 2000 (2.83% p.a.). The latter gives an overall growth rate for world GDP during the 21st Century of 3.12% p.a. and can be taken to represent the business-as-usual scenario. On this basis, the growth rates in world GDP under the three scenarios could be calculated to be 1.4% p.a., 2.1% p.a. and 3.1% p.a. for the entire century, respectively. It was then possible to calculate the total wealth created for the periods from 2000 to 2049 and from 2050 to 2099 by summing the annual world GDP for each of these two periods under each scenario (Table 2). The results of these calculations are presented graphically in Fig. 1a-d.

The atmospheric CO2 concentration is listed at 50-year intervals for the period 1750-2000 and the rate of increase of atmospheric CO2 has been calculated at 50-year intervals (Table 2). The atmospheric CO2 concentrations in 2050 and 2100 were derived by extrapolation of the rate of increase of the CO2 concentration from 1950 to 2000 (0.34% p.a.).
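As an illustration of these calculations, the short sketch below (a minimal reconstruction, not the author's code, assuming constant exponential growth within each interval) recovers the 1950-2000 growth rate of world GDP from the endpoint values in Table 2 and approximates the total wealth created in that interval by summing annual GDP. The summed wealth comes out near, though not exactly at, the tabulated $847 trillion, the difference presumably reflecting rounding or a slightly different interpolation in the original calculations.

```python
def annual_growth_rate(v_start, v_end, years):
    """Compound annual growth rate linking two endpoint values."""
    return (v_end / v_start) ** (1.0 / years) - 1.0

def total_wealth(gdp_start, rate, years):
    """Total wealth over an interval: the sum of annual world GDP,
    assuming constant exponential growth from the starting value."""
    return sum(gdp_start * (1.0 + rate) ** t for t in range(years))

gdp_1950, gdp_2000 = 4082.0, 41017.0  # world GDP, $US billions (Table 2)
rate = annual_growth_rate(gdp_1950, gdp_2000, 50)
print(f"World GDP growth 1950-2000: {rate:.2%} p.a.")      # ~4.72% p.a.
wealth = total_wealth(gdp_1950, rate, 50) / 1000.0         # $US trillions
print(f"Total wealth created 1950-1999: ~${wealth:.0f} trillion")
```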
3. Results

Various methods have been devised to quantify adverse environmental impacts on a global scale. One involves an equation of the form

Environmental Impact = Population × Per Capita GDP × Impact/GDP

Using this equation, it is possible to quantify the global environmental impact in terms of world GDP (or total wealth created in a given unit of time) and the environmental impact per unit of GDP. This latter term depends on many factors such as the amount of waste produced per unit of economic activity, the efficiency of economic activity and the technologies used in production (Ehrlich et al. 1977). Since this term is not readily quantifiable, I have defined Environmental Impact solely in terms of total wealth created per unit of time, on the understanding that this is an approximation and that the analysis is therefore only semi-quantitative. However, others have used a similar approach with some measure of success (e.g. Ehrlich and Holdren 1971, Diamond 2005).
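A minimal sketch of this identity (a form of the IPAT equation; my own illustration, not the author's code), using the year-2000 entries of Table 2 and setting the unquantifiable Impact/GDP term to unity so that impact reduces to world GDP:

```python
def environmental_impact(population, per_capita_gdp, impact_per_gdp=1.0):
    # Impact = Population x Per capita GDP x (Impact / GDP); the last
    # factor is set to 1 here, so impact is measured in GDP units.
    return population * per_capita_gdp * impact_per_gdp

pop_2000 = 6.272e9     # persons (Table 2)
pc_gdp_2000 = 6539.0   # constant 1990 $US per person (Table 2)
impact = environmental_impact(pop_2000, pc_gdp_2000)
print(f"Impact proxy for 2000: ${impact / 1e9:,.0f} billion")  # ~41,000
```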
Figure 1. Plots showing the increase of a. world population, b. per capita GDP, c. GDP and d. total wealth created from 1500 to 2000, and the projected rates of increase in these parameters from 2000 to 2100. Calculated increases in per capita GDP, GDP and total wealth created from 2000 to 2100 are based on the three scenarios described in the Methods section. The data are taken from Table 2. All $ values are given in constant 1990 $US.
3.1. The Past

The period from 8000 B.C. to 1500 A.D. marks the period from the dawn of civilization to the beginning of the modern phase of European history. For this period, DeLong (1998) assumed a constant world per capita GDP of $115 (in constant 1990 $). This value is taken as a measure of per capita agricultural output in pre-industrialized agricultural societies and reflects mankind’s seemingly unending bondage to the soil.

For the period from 8000 B.C. to 3000 B.C., the growth rates in world population and GDP were 0.02% p.a. This figure of 0.02% p.a. can be considered the natural growth rate of the human population before the advent of civilization. The total wealth created in this period was $3.48 trillion. It represents 0.3% of the total wealth created in the subsequent 5000 years of our history.

For the period from 3000 B.C. to 1500 A.D., growth rates in world population and world GDP increased to 0.03-0.14% p.a. and the total wealth created in this 4500-year period was $65.9 trillion (Table 1), which is equivalent to that created between 1500 and 1890 (Table 2). It should be noted, however, that growth rates in world population did not increase systematically during this period, reflecting the vulnerability of the human population to the vagaries of life such as war, famine, pestilence and death (the four horsemen of the apocalypse). Significantly, the highest growth rates (0.11-0.14% p.a.) occurred during the classical period and did not reach these levels again until the Renaissance. Although these growth rates were modest by future standards, it would be wrong to think that man’s impact on the environment was negligible during this period (Goudie 2000, Glasby 2002).

For the period from 1500 to 1750, world population increased by 0.2% p.a. and world GDP by 0.4% p.a. (Table 2). These growth rates were again modest by future standards. The total wealth created during this period was $21 trillion.

The period from 1750 to 1950 was marked by steady increases in the annual rate of increase in world population, per capita GDP and world GDP through each of the 50-year intervals in this period. World population and world GDP increased by 1.6 and 2.6 times respectively between 1750 and 1850 and by 2.1 and 11.3 times respectively between 1850 and 1950. However, the rate of increase of per capita GDP was somewhat lower between 1900 and 1950 than might have been expected based on extrapolation of the rate from 1850-1900. This was probably the result of the two world wars. The total wealth created during these two periods was $21 trillion and $139 trillion, respectively.

Table 1. Listing of world population, world per capita GDP and world GDP at a single 5,000-year interval from 8000 B.C. to 3000 B.C. and at 500-year intervals for the period from 3000 B.C. to 1500 A.D., based on data taken from DeLong (1998). The methods for calculating the total wealth created and the exponential growth rates in each time interval are given in the Methods section.
Year | World pop. (million) | Percent increase p.a. | Per capita GDP ($) | Percent increase p.a. | World GDP ($US billions) | Percent increase p.a. | Total wealth created ($US trillions)
-8000 | 4.5 | 0.02% | 115 | 0 | 0.52 | 0.02% |
-3000 | 14 | 0.02% | 115 | 0 | 1.6 | 0.02% | 3.5
-2500 | 19 | 0.06% | 115 | 0 | 2.2 | 0.06% | 1.0
-2000 | 27 | 0.07% | 115 | 0 | 3.1 | 0.07% | 1.3
-1500 | 38 | 0.07% | 115 | 0 | 4.4 | 0.07% | 1.9
-1000 | 50 | 0.05% | 115 | 0 | 5.7 | 0.05% | 2.5
-500 | 100 | 0.14% | 115 | 0 | 11.5 | 0.14% | 4.2
0 | 170 | 0.11% | 115 | 0 | 19.5 | 0.11% | 8.1
500 | 195 | 0.03% | 115 | 0 | 22.4 | 0.03% | 10.7
1000 | 265 | 0.06% | 115 | 0 | 30.5 | 0.06% | 12.7
1500 | 425 | 0.09% | 115 | 0 | 48.9 | 0.09% | 20.0
From 1950 to 2000, world population and world GDP increased by 2.5 and 10.0 times respectively (Table 2). These increases were far higher than anything that had been seen before. Even allowing for the somewhat lower baseline in 1950 as a result of the two world wars, this was a period of remarkable growth. In addition, the total wealth created during this 50-year period was $847 trillion, which is equivalent to 3.4 times that created in the preceding 10,000 years.
3.2. The Future

From the data presented in Tables 1 and 2, it is possible to calculate the increase in total wealth that will be created in the 21st Century relative to the total wealth created by humans during their entire history, based on the growth rates in each of the three scenarios mentioned previously, and this leads to a remarkable conclusion. In the 21st Century, we have the potential to create 8.1 times, 12.5 times or 25.5 times more wealth than has been created in the entire human history to date, depending on whether we increase world GDP by factors of 4, 8 or 21.6 during this period. In principle, these increases in wealth could continue into the 22nd Century and beyond, assuming no unforeseen crises. These calculations are a stark reminder of the power of exponential growth. The calculations of the total wealth created in the 21st Century are based on the assumption that world GDP will increase by 1.4% p.a., 2.1% p.a. or 3.1% p.a. These growth rates are in the range of what might realistically be expected for this period.
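These multiples can be reproduced directly from the wealth columns of Tables 1 and 2, as in the following sketch (an illustration using the tabulated values; not part of the original paper):

```python
# Total wealth created ($US trillions), from Tables 1 and 2
wealth_pre_1500 = [3.5, 1.0, 1.3, 1.9, 2.5, 4.2, 8.1, 10.7, 12.7, 20.0]
wealth_1500_2000 = [3, 3, 4, 5, 6, 8, 13, 31, 108, 847]
historical = sum(wealth_pre_1500) + sum(wealth_1500_2000)  # ~$1094 trillion

# Wealth created 2000-2099 under the three scenarios (Table 2)
scenarios = {"GDP x4 per century": 2938 + 5877,
             "GDP x8 per century": 3569 + 10095,
             "extrapolated (x21.6)": 4944 + 22972}
for name, wealth_21st in scenarios.items():
    print(f"{name}: {wealth_21st / historical:.1f} x all previous history")
# -> 8.1x, 12.5x and 25.5x, as quoted above
```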
4. Atmospheric CO2

The atmospheric CO2 concentration decreased by 3.5% from 1500 to 1750 but then increased by 3.3% from 1750 to 1850, by 5.1% from 1850 to 1950 and by 18.1% from 1950 to 2000 (Table 2). Significantly, the increase in atmospheric CO2 concentration began in about 1780, about 40 years before the turning point in the increase in world population (Etheridge et al. 1996). This was probably the result of forest clearance for agriculture in the New World. Overall, the average increase in atmospheric CO2 for the period from 1750 to 1950 was only 0.06% p.a. but this increased markedly to 0.34% p.a. for the period 1950-2000.

For the period from 2000 to 2100, the atmospheric CO2 concentration was calculated to increase from 367 p.p.m. in 2000 to 434 p.p.m. in 2050 and 513 p.p.m. in 2100 based on a rate of increase of 0.34% p.a. (Table 2). However, recently released data show that the atmospheric CO2 concentration increased by 2.1 p.p.m. p.a. on average between 2002 and 2006, compared with an average increase of 1.5 p.p.m. p.a. for the last few decades (Tans 2007). If the atmospheric CO2 concentration were to increase at this rate for the rest of this century, it would attain a value of 576 p.p.m. in 2100.
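The two extrapolations can be checked with a few lines of code (a sketch assuming pure compound or pure linear growth from the 2000 baseline; small differences from the quoted 434, 513 and 576 p.p.m. values reflect rounding):

```python
co2_2000 = 367.0  # p.p.m. in 2000 (Tans 2007)

def co2_compound(years, rate=0.0034):
    """Compound growth at the 1950-2000 rate of 0.34% p.a."""
    return co2_2000 * (1.0 + rate) ** years

def co2_linear(years, ppm_per_year=2.1):
    """Linear growth at the 2002-2006 rate of 2.1 p.p.m. p.a."""
    return co2_2000 + ppm_per_year * years

print(f"2050, 0.34% p.a.:    {co2_compound(50):.0f} p.p.m.")   # ~435
print(f"2100, 0.34% p.a.:    {co2_compound(100):.0f} p.p.m.")  # ~515
print(f"2100, 2.1 p.p.m./yr: {co2_linear(100):.0f} p.p.m.")    # ~577
```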
Table 2. Listing of world population, world per capita GDP and world GDP at 50-year intervals for the period from 1500 to 2000, based on data taken from DeLong (1998). Atmospheric CO2 concentrations from 1750 to 1950 are taken from Etheridge et al. (1996) and for 2000 from Tans (2007) and Tans and Conway (2007). Extrapolation of the data for the period 2000 to 2100 is based on three scenarios for increases in the growth rates in world GDP of 1.4% p.a., 2.1% p.a. and 3.1% p.a., respectively, as described in the Methods section.
Year | Scenario | World pop. (millions) | Percent increase p.a. | Per capita GDP ($) | Percent increase p.a. | World GDP ($US billions) | Percent increase p.a. | Total wealth created ($US trillions) | Period | Percent increase per 50 years | Atmos. CO2 (p.p.m.) | Percent increase p.a.
1500 | | 425 | | 115 | | 49 | | | | | 285 |
1550 | | 481 | 0.25% | 127 | 0.20% | 61 | 0.45% | 3 | 1500~1549 | | |
1600 | | 545 | 0.25% | 140 | 0.20% | 76 | 0.45% | 3 | 1550~1599 | 25% | |
1650 | | 545 | 0.00% | 155 | 0.20% | 85 | 0.20% | 4 | 1600~1649 | 18% | |
1700 | | 610 | 0.23% | 172 | 0.21% | 105 | 0.43% | 5 | 1650~1699 | 17% | |
1750 | | 720 | 0.33% | 190 | 0.20% | 137 | 0.53% | 6 | 1700~1749 | 27% | 275 |
1800 | | 900 | 0.45% | 210 | 0.20% | 189 | 0.65% | 8 | 1750~1799 | 34% | 288 | 0.09%
1850 | | 1200 | 0.58% | 300 | 0.72% | 360 | 1.30% | 13 | 1800~1849 | 64% | 284 | -0.03%
1900 | | 1625 | 0.61% | 679 | 1.65% | 1103 | 2.26% | 31 | 1850~1899 | 137% | 295 | 0.08%
1950 | | 2516 | 0.88% | 1622 | 1.76% | 4082 | 2.65% | 108 | 1900~1949 | 247% | 310 | 0.09%
2000 | | 6272 | 1.84% | 6539 | 2.83% | 41017 | 4.72% | 847 | 1950~1999 | 684% | 367 | 0.34%
2050 | 4×1950-99 | 7258 | 0.29% | 11302 | 1.10% | 82033 | 1.40% | 2938 | 2000~2049 | 247% | |
2100 | 4×1950-99 | 8400 | 0.29% | 19532 | 1.10% | 164067 | 1.40% | 5877 | 2050~2099 | 100% | |
2050 | 8×1950-99 | 7258 | 0.29% | 15983 | 1.80% | 116013 | 2.10% | 3569 | 2000~2049 | 321% | |
2100 | 8×1950-99 | 8400 | 0.29% | 39064 | 1.80% | 328134 | 2.10% | 10095 | 2050~2099 | 183% | |
2050 | Extrapolated | 7258 | 0.29% | 26260 | 2.83% | 190594 | 3.12% | 4944 | 2000~2049 | 484% | 434 | 0.34%
2100 | Extrapolated | 8400 | 0.29% | 105432 | 2.83% | 885627 | 3.12% | 22972 | 2050~2099 | 365% | 513 | 0.34%
This suggests that the projected atmospheric CO2 concentrations in 2050 and 2100 presented in Table 2 may be considerable underestimates of the actual concentrations that will be reached if no effort is made to mitigate CO2 emissions, and clearly shows that atmospheric CO2 emissions are continuing to increase rapidly rather than stabilizing as hoped. The observation of Philipona et al. (2005) that the presence of water vapour in the atmosphere is enhancing global warming in parts of Europe is an additional concern. For comparison, projections by the Intergovernmental Panel on Climate Change (IPCC) indicate that atmospheric CO2 concentrations in 2100 will lie between 540 and 970 p.p.m. On this basis, globally averaged surface temperatures are projected to rise by between 1.7 and 4.2°C and mean sea level by between 0.11 and 0.77 m between 1990 and 2100 (Anon 2001).
5. Discussion
5.1. The Views of John Maynard Keynes

John Maynard Keynes was the preeminent economist of the 20th century. Keynes (1931) was able to demonstrate that the accumulation of capital in conjunction with technical change would lead to an increase in the living standards in the progressive countries of between 4 and 8 times per century based on the power of compound interest. For this purpose, he dated the modern age as beginning in the sixteenth century with the accumulation of capital at compound interest. Technical change on the necessary scale had to await the industrial revolution. Prior to that, the living standards of the average man had been largely governed by agricultural output. As shown above, world GDP increased by 11.3 times between 1850 and 1950 and by 10.0 times between 1950 and 2000, confirming the prediction of Keynes.

Assuming no major wars and no major increases in population, Keynes predicted that the economic problem would be solved, or be within sight of a solution, within 100 years. The economic growth rates that have been achieved since 1850 mean that we have now unlocked the secret of increasing wealth on an exponential basis. Keynes was, of course, writing at the time of the Great Depression. He saw this development as entirely benign. However, he imagined a future quite different from the one we have chosen; one in which we would work a 15-hour week and the major problem would be how to occupy one’s leisure. In this, Keynes made three assumptions: that we would have the power to control population, that we would have the determination to avoid wars and civil dissensions, and that we would entrust to science the direction of those matters that are properly the concern of science. If these three assumptions were to hold, he considered that the rate of accumulation of wealth, as fixed by the margin between our production and our consumption, would easily look after itself. Despite our failure to control population and to avoid wars, the views of Keynes on the creation of wealth have stood the test of time. However, what Keynes saw as entirely benign, the permanent solution of the economic problem, namely the struggle for subsistence, we can see in a different light, namely the overpowering of the natural environment by our excessive population growth and consumption of resources.
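Keynes's compounding argument links directly to the scenario rates used in the Methods section: a factor F per century corresponds to a compound annual rate r = F^(1/100) - 1, as the following one-line check (my illustration, not from the original text) shows:

```python
for factor in (4, 8, 21.6):
    rate = factor ** (1.0 / 100) - 1.0
    print(f"x{factor} per century -> {rate:.2%} p.a.")
# -> ~1.40%, ~2.10% and ~3.12% p.a., the three scenario growth rates
#    of world GDP used in the Methods section
```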
5.2. The Future Imperfect

We now live in a world of intense global competition characterized by maximizing the use of assets and the mobility of capital. The principal aim of economic activity is to maximize growth rates in order to create wealth and thereby increase human well-being. In the advanced countries, this is achieved by creating demand (or increasing consumption). Growth is considered critical because it can alleviate material scarcity, increase employment and eliminate poverty. However, one of the paradoxes of this emphasis on growth based on intense economic competition is the widening gulf in wealth between the richest and poorest both within and between countries.

In economics, everything must have a value (or price) and therefore be quantifiable. The natural world is considered to have no value per se except in the sense that it contributes to human well-being. However, this simple-minded view is readily open to question if we were to consider how we would survive in Antarctica (or on Mars) if we were cut off from external sources of supply. This example clearly shows that the natural world is the buffer without which we could not hope to survive on this planet. Costanza et al. (1997) have confirmed this observation by demonstrating that, far from being without value, the natural world or, as they put it, the world's ecosystem services and natural capital actually contribute more to human welfare ($US16-54 trillion) than the global GDP ($US18 trillion). This remarkable conclusion comes at a time when we are putting great stress on the natural environment and when there are authoritative predictions showing that this stress will increase markedly over the course of the present century and beyond. It demonstrates beyond doubt that we must put much greater emphasis on the maintenance of our ecological support systems if we consider the long-term well-being of the world population to be our major priority. This goal may be considered to be analogous to sustainable development.

Perhaps the prime example of the detrimental effects of environmental excesses at present is Haiti, where the combination of rapid population growth (60% in the last 20 years), intense deforestation (98% loss) and increased vulnerability to tropical storms has left the country destitute. This is merely a forerunner of what will happen on a much larger scale if the warnings of Costanza et al. (1997) are ignored.
6. Exponential Growth Rates

The preceding sections have emphasized the role of exponential growth in the increase of world population and world GDP from 1750 onwards. From these data, it is clear that the projected growth rates in world GDP are completely unsustainable even in the comparatively short term of 50 years. This conclusion is in agreement with the previous findings of Goodland et al. (1991) that anything remotely resembling the magnitude of a 5- to 10-fold increase in global economic activity over the next 50 years, as proposed in the Brundtland report, would simply speed us from today’s long-run unsustainability to imminent collapse of the global ecosystem, and of Ehrlich and Ehrlich (1990) that the planet could not support a quintupling of economic activity even for a short time.

The fact that we have the potential to create between 8 and 26 times more wealth in the 21st Century than has been created during the preceding 10,000 years of human history suggests that we are far removed from the transition to sustainability within two generations as argued by the National Research Council (1999) but rather are on course to overwhelm the natural environment on which we depend for our tenure of this planet within this century. Under these circumstances, it seems quite possible that there will be a sharp decline in world population to more sustainable levels at some time in the future. The exponential growth rates for world population and world GDP calculated here, particularly from 1950 onwards, clearly show that the concept of sustainable development is a chimera, meaningless in the context of our time.

The idea that economic growth leads to environmental degradation and inequalities in wealth was clearly demonstrated by Daly (1978). Georgescu-Roegen (1971) has also argued that we live off the capital of low entropy. As a result, consumption of resources leads to an increase in entropy in the global system, which I have interpreted as environmental degradation (Glasby 1988, 1995). Other authors have argued that economic growth can lead to increased environmental protection because the increased wealth enables countries to reduce their rates of population growth and impact on the environment by increasing investment in environmental protection measures (e.g. Simon 1998). Although this argument has a superficial appeal, it is flawed because the increase in entropy created globally by consumption of resources will always be greater than the increased order in the environment created locally by increased spending on environmental protection, in accordance with the Second Law of Thermodynamics. This is a fundamental conclusion which is in accord with the large number of negative environmental impacts reported nowadays and is broadly supported by the findings of Strand (2002).
7. Atmospheric CO2

Life on Earth is dependent on the Earth being a dynamic planet with an equable temperature regime and a regulated atmosphere (Lovelock 2006). If the atmospheric CO2 concentration should attain values of 513 or 576 p.p.m. in 2100 as previously calculated, we will be going well outside the envelope of greenhouse gas concentrations which have characterized human evolution. Although the resultant global warming may be tolerable in itself, it will disturb the energy balance of both the atmosphere and oceans with markedly detrimental effects (Glasby 2006). As such, it could significantly affect the carrying capacity of the Earth for an advanced industrial population critically dependent on agricultural production to feed itself. This will have severe implications at a time when the human population will be at its highest level ever and economic activity well outside the limits required for the sustainable development of the environment.

In a separate publication, I have shown that the high atmospheric CO2 concentrations estimated for 2100 are on course to lead to the loss of the world’s coral reefs and major ice sheets, with the possibility of shutting down the ocean thermohaline circulation at some time further into the future (Glasby 2006). I also calculated that consumption of the total global hydrocarbon reserves would increase the atmospheric CO2 concentration to about 2,200 p.p.m. As a consequence, it is not possible to utilize more than 20% of the global hydrocarbon reserves without an accompanying massive programme for the sequestration of CO2 if we do not wish to risk shutting down the thermohaline circulation and causing a major environmental catastrophe. These findings clearly show that dealing with the global CO2 problem will be the major task of the 21st Century. Rees (2006) has recently urged the adoption of a programme analogous to the Manhattan or Apollo projects in order to reduce the impacts of greenhouse gas emissions on global climate change.

In spite of this, there appears to be no consensus on the need to reduce economic growth in order to reduce concentrations of CO2 in the atmosphere in the future. At the 2005 G8 summit of energy and environment ministers in London, the then British Prime Minister Tony Blair said that no country would want to sacrifice economic growth in order to meet the challenge of climate change but that all economies know that the only sensible, long-term way to develop is to do it on a sustainable basis (Anon 2005). In particular, he stressed the role of the private sector in developing the science and technology to solve this problem. Ruddiman (2005), on the other hand, has argued that ‘draconian economic sacrifices’ will be needed to avert the major impacts of global warming.

Because of our failure to curb CO2 emissions, it seems unlikely that we can hold atmospheric CO2 concentrations to 450 p.p.m. but will have to settle for a concentration of 550 p.p.m., if this is still possible. Such a high CO2 concentration will induce major changes in climate which will have dramatic effects on the environment, as recently outlined by Hansen et al. (2007). Future generations will pay a heavy price for our failure to control climate change. Introduction of major new technologies such as solar power and nuclear fusion could provide abundant energy without the corresponding CO2 emissions, but time is running out to introduce them before global warming induces irreversible environmental changes.

Particularly important is the observation of Lenton et al. (2008) that a variety of tipping elements involving nonlinear switches in the state or modes of variability of components of the climate system could reach their critical point within this century under the influence of global climate change. The greatest threats are tipping the Arctic sea ice and the Greenland ice sheet. This factor needs to be taken into account in devising future strategies on climate change. Meinshausen et al. (2009) have also devised a strategy for limiting global warming to 2°C using greenhouse-gas emission targets by keeping atmospheric CO2 levels at less than 400 p.p.m. These authors have calculated that less than half the proven economically recoverable oil, gas and coal reserves could still be emitted up to 2050 to remain within this limit. However, Lynas (2008) has shown that an atmospheric CO2 level of 400 p.p.m. will actually be attained in 2016. Particularly disturbing is the claim by Rapley (2005) that the Antarctic Peninsula has warmed by about 2.5°C in the last 50 years and that this is having dramatic effects on the ice sheets in this region. If so, this will lead to very serious environmental problems in the future.
8. Prospects for Societal Collapse

Diamond (2005) has recently drawn attention to historical examples where local populations have collapsed as a result of increasing impacts on the environment, such that the local environment could no longer support the increased population even after several centuries of successful occupation. The question is whether such collapses are still possible in the 21st century and beyond. For this purpose, four types of societal collapse can be considered.
• natural catastrophes such as meteorite impacts or volcanic eruptions. Examples of the latter include the Krakatoa eruption of 535-536 A.D., in which about 100 km3 of magma were erupted and which heralded the dark ages in Europe (Keys 2000), and the Laki and Grimsvötn eruptions in 1783-1785, which caused cooling of the northern hemisphere by as much as 1°C and the death of 9,000 Icelanders (Thordarson and Self 1993). Events of this magnitude are rare.
• technological collapse as a result of over-dependence on centralized power supplies or communications technologies in a globalized world.
• environmental collapse of human societies such as occurred on Easter Island, Pitcairn and Henderson Islands, and of the Anasazi and Maya societies of North America and the Norse colony on Greenland. These societies all existed for several hundred years before their ultimate demise. Diamond (2005) has listed the factors leading to the collapse of these societies: deforestation and habitat destruction, soil problems (including soil erosion, salinization and soil fertility losses), water management problems, over-hunting, over-fishing, effects of introduced species on native species, human population growth and increased per capita impact of people. The last two factors approximate to the environmental impact as defined earlier and may therefore be considered to be roughly equivalent to the sum of the previous six factors. For future collapses of human societies, Diamond (2005) added four additional factors: global climate change, build-up of toxic chemicals in the environment, energy shortages and complete utilization of the Earth’s photosynthetic activity by humans. In each of the historical societal collapses, the fundamental problem was that population growth outstripped available resources, as foreseen by Thomas Malthus in 1798. The historical examples differ from the present situation in occurring in almost closed systems, in contrast to the interconnected world in which we now live. However, we live on a single planet which in itself can be considered to be a closed system, so that this difference can be considered to be merely a problem of scale.
• violent conflict was the terminal phase in the demise of each of the historical societal collapses described by Diamond (2005) as desperate people fought over the last remaining resources.
Global environmental impact has previously been defined in terms of the total wealth created at a given time multiplied by the environmental impact per unit of GDP, which is generally unknown and must therefore be treated as a constant as an approximation. On this basis, it can be calculated from the data given in Table 2 that the global environmental impact will be roughly 3.5, 4.2 and 5.8 times greater in 2050 than in 2000 and roughly 7.0, 11.9 and 27 times greater in 2100 than in 2000, based on the assumptions about the growth of global GDP described earlier. These figures demonstrate unequivocally that environmental impacts will increase throughout this century and will be at least an order of magnitude greater in 2100 than at present.

However, the values calculated above are only average values. Global environmental impact will be greatest in fragile environments where population growth has been the greatest. Some regions will be able to minimize environmental degradation by importing environmentally sensitive products from elsewhere, thereby exporting environmental degradation to other countries. However, environmental degradation will increase overall. It will be most marked on local and regional scales and will give rise to knock-on effects. As one region becomes subject to severe environmental degradation, part of the population will migrate to more favoured regions, exacerbating the environmental problems there (Myers 2002). It therefore seems likely that most of the fragile areas of the Earth’s surface will be severely degraded by 2100.

Based on the calculations presented here, it becomes clear that there will be a massive environmental deficit by the turn of the century. This view is supported by Diamond (2005), who claims that he has never met anybody who seriously argues that the world could support 12 times its current impact, but this is what we can expect in 2100. This conclusion is supported by recent findings of the Secretariat of the Convention on Biological Diversity (2006) that the global demand for resources now exceeds the biological capacity of the Earth to renew them by 20%. This compares with the situation in 1960 when mankind used only about one half of the Earth’s biocapacity.

Global climate change will play a major role in determining the magnitudes of environmental impacts by amplifying existing hazards. For example, the German insurance company Munich Re has shown that weather-related natural catastrophes such as windstorms, floods, severe weather events, heat waves and forest fires have a disproportionate effect on insurance pay-outs, which have increased almost ten-fold in real terms since the 1960s, and that even small shifts in mean atmospheric temperatures can lead to dramatic increases in the probabilities of exceeding critical threshold values (Pfister et al. 2005). A recent major publication has emphasized the importance of thresholds as triggers for environmental disruption (Schellnhuber et al. 2006). For example, a global temperature rise of 3°C above the pre-industrial baseline would lead to the reversal of the terrestrial carbon sink and possible destabilization of the Antarctic ice sheets, both of which would be irreversible. In addition, many dangerous climatic events will be associated with increased frequency of extreme events. If the atmospheric CO2 concentration were to rise to 550 p.p.m., it is unlikely that the increase in mean global temperature would stay below 2°C. These increases are all within the limits predicted for the 21st century.

There are also first indications that environmental changes are beginning to accelerate away from us. It has already been mentioned that the projected atmospheric CO2 concentrations in 2050 and 2100 presented in Table 2 may be considerable underestimates of the concentrations that will be reached if no effort is made to mitigate CO2 emissions. In addition, it has recently been shown that the wasting of Greenland’s ice sheet increased by a factor of 2.2 between 1996 and 2005, resulting in an increased rise in sea level (Kerr 2006; Rignot and Kanagaratnam 2006). Anthropogenic ocean acidification resulting from increased atmospheric CO2 concentrations will also have a greater impact than previously thought, particularly at high latitudes (Orr et al. 2005). There is also now clear evidence that the Atlantic meridional circulation at 25°N is slowing (Bryden et al. 2005) and that the Antarctic Bottom Water (AABW) in both the Indian and Pacific Oceans is freshening (Rintoul 2007). Finally, it is anticipated that cooling by atmospheric aerosols will decline in future, leading to large increases in global temperatures and possible weakening of the North Atlantic thermohaline circulation (Andreae et al. 2005; Delworth and Dixon 2006).
These findings are preliminary warnings that current models may be underestimating the extent and possible effects of future climate change. These new observations are disturbing because it is becoming increasingly obvious that the Kyoto Protocol is ineffective in controlling global atmospheric CO2 emissions (Kintisch and Buckheit 2006). Although the energy intensity (energy consumed per unit of GDP) has decreased substantially since 1990, global CO2 emissions are continuing to rise because of the combined effects of the growth of global per capita income and the increase in world population (Anon 2007a,b).

In addition to emissions of greenhouse gases to the atmosphere from fossil fuels, there are other sources of these gases. For example, the 1997 fires in Indonesia, which were associated with the largest El Niño event on record, released between 0.81 and 2.57 Gt of C into the atmosphere, of which more than 74% was the result of burning peat and the rest from burning vegetation (Page et al. 2002; Bousquet et al. 2006). The amount of CO2 released was equivalent to an increase in mean annual global carbon emissions of between 13 and 40% for that year.

CH4 is 20 times more potent as a greenhouse gas than CO2. Thawing of permafrost along lake margins in northern Siberia increased methane emissions in the region by 58% between 1974 and 2000, mainly as a result of ebullition (bubbling) of methane in thaw lakes (Walter et al. 2006). The amount of methane liberated in this region during 2003 and 2004 was about 3.8 Tg yr-1. This compares with a total contribution of CH4 to the atmosphere from geological sources of 45 Tg yr-1 (Kvenvolden and Rogers 2005). The amount of CH4 liberated from this source is therefore relatively minor at present. However, it has been estimated that this region hosts about 500 Gt of C in permafrost, which makes it a potentially important source of methane in the future (Walter et al. 2006). For comparison, the methane content of the atmosphere has increased from 0.7 p.p.m. in 1800 to 1.745 p.p.m. in 1998 (with a greenhouse gas equivalent of about 44 p.p.m. of CO2; Kvenvolden and Rogers 2005). Continued global warming will ensure that atmospheric emission of CH4 from this source will continue to increase and that this process will be essentially irreversible. Destabilization of labile oceanic and continental gas hydrates offers the prospect of very much larger emissions of CH4 to the atmosphere in the longer term, particularly when associated with slumping of major offshore structures (Glasby 2003). These examples emphasize the amplification of global warming from sources of greenhouse gases other than fossil fuels.

From the previous observations of Diamond (2005), it is clear that there will be no single factor involved in the environmental collapse of human societies but a number of factors coming together to produce major impacts locally and regionally, which will then knock on to other areas. It seems likely that, once a critical stage is reached, societal collapses will be quite rapid and will be complete within a few decades. This process has been eloquently described by Pierrehumbert (2006) as a catastrophe in slow motion.
9. Options for the Future

As a species, we have come to assume that we are in control of our destiny. However, it should be borne in mind that the present phase of our history from 1820 onwards began only 189 years ago. The Roman and British Empires both lasted far longer than this but both came to an end eventually. Collapse of the Roman Empire was followed by the Dark Ages, which lasted from about 500 to 1000 A.D. and were marked by frequent warfare and the virtual disappearance of urban life. In the long sweep of history, mankind has been accustomed to living in an under-populated world in which the world’s resources were there for the taking. Our mindset is still not adjusted to thinking in terms of restraint.

In considering our future prospects, Morris (1994) has summed up our predicament exactly: ‘No matter how extraordinary our achievements may be, we nonetheless remain animals and subject to all the rules of biology. If we ignore these rules and, for instance, overpopulate and pollute the planet, we will not be protected by some supernatural force. We will become extinct just as easily as any other species.’

The projected growth rates in world population, world GDP, total wealth created and atmospheric CO2 concentration in the 21st Century presented here are sufficiently alarming to demand action. Several courses of action may be proposed in an attempt to move towards a more sustainable world. Firstly, it is imperative to reduce the rate of population growth. In order to achieve this, we need to move towards one-child families as the norm within the next 15 years. The increase in world population is a major driving force in increasing consumption and in degrading the environment. Secondly, we must reduce consumption in the affluent countries. This means that we should move towards more modest lifestyles. It must be realized that maximizing growth rates by creating demand is self-defeating in the end. Finally, major efforts must be made to reduce the emissions of greenhouse gases. I have already dealt with this problem in detail elsewhere (Glasby 2006). However, it is already clear that the costs of mitigation of global climate change will be far less than the costs of coping with the effects of global climate change (Stern 2006).

In addition, we should begin the process of moving to world disarmament. This could be achieved by a phased, multilateral reduction in world defence spending over the next 15 years. The money saved should be directed to the alleviation of world poverty and the beneficiation of the environment. In 2003, total world defence spending was $910 billion (Anderson 2005). Armed conflict led to the death of up to 187 million people in the 20th Century (McNamara 1995; Rees 2003). This approach to solving the world’s problems must surely be regarded as an anachronism.

The aim of these measures is to reduce world population, resource consumption and global CO2 emissions, the three principal drivers controlling environmental impact (the ‘triple whammy’), in order to minimize mankind’s impact on the environment. This can be considered to be a form of enlightened self-interest. It is already clear that we will have to devote an increasing proportion of the total wealth created over the course of the 21st Century to the mitigation of the adverse effects of environmental impacts such as floods, sea level rise, hurricanes, droughts, water shortages, soil erosion, loss of biodiversity and pollution and, in particular, to the large-scale sequestration of CO2. This will impose a huge cost penalty on the use of resources which were once supplied free of charge by the natural environment. Whatever is decided, it is clear that we do not have the option of continuing on our present course. In order to achieve these aims, it will be necessary to achieve a global consensus on a scale beyond anything previously attempted.
Conclusions

The data presented here show that we have developed an economic system over the past 190 years dominated by the exponential growth of both world population and world GDP. Continuation of these high growth rates for the rest of this century will almost certainly overwhelm the natural environment on which we depend for our sustenance and have a major impact on the human population. In such circumstances, it is not unrealistic to think in terms of a sharp decline in world population at some time in the future. In all probability, major environmental impacts will take place rapidly on a human time scale, most likely occurring as a series of shocks. In order to minimize the risks, we must cut back on over-population, over-consumption and greenhouse gas emissions now. Our goal is not to achieve sustainable development, which is no longer possible, but to minimize the effects of markedly unsustainable development.

The 21st Century must therefore become the century of the environment, when we come to terms with the environmental excesses of the industrial revolution and its aftermath, the period of mass consumption in the richer countries and mass poverty in the poorer countries. This means putting the environment at the centre of national and international decision making. In order to achieve these objectives, the human race must achieve a greater unity of purpose than ever before. For this, inspired political leadership will be required.

The global banking crisis which took place in 2009 gives us good cause to reconsider our views on wealth creation and care for the environment. Ideally, this will result in the affluent adopting a less consumptive and polluting lifestyle and sharing the Earth’s resources more equitably. Ultimately, there needs to be a change in attitude to nature and the environment from one of gross exploitation for economic development to one of genuine respect, care and compassionate husbandry. However, the relentless growth of world population and the inherent demand for more amongst the many cast doubt on whether this goal can be easily attained. We have now unlocked the secret of increasing wealth on an exponential basis but seem incapable of stopping it. What we need now is an economist of Keynes’ ability to show us how to end this exponential growth without causing massive deflation of the global economy.
Acknowledgments

I am indebted to my colleague, Dr Ren Xiangwen, for his considerable assistance in preparing the tables and figures, and to Professor P. Crutzen, Professor H.D. Schulz, Professor G. Wörner and three anonymous reviewers for helpful discussions.
References

Anderson, G. (2005). US defence budget will equal ROW combined “within 12 months”. Jane’s Defence Industry, 4 May.
Andreae, M. O., Jones, C. D. & Cox, P. M. (2005). Strong present-day aerosol cooling implies a hot future. Nature, 435, 1187-1190.
Anon (2001). Climate Change 2001: The Scientific Basis. Intergovernmental Panel on Climate Change (www.ipcc.ch).
Anon (2005). Blair makes climate summit call. BBC News (http://www.bbc.co.uk; 1.11.05).
Anon (2007a). Intergovernmental Panel on Climate Change 2007 Working Group III Report on Mitigation of Climate Change (www.ipcc.ch).
Anon (2007b). Gas exchange: CO2 emissions 1990-2006. Nature, 447, 1038.
Anon (2007c). Climate change: Climate research at the Met Office Hadley Centre informing Government policy into the future.
Bousquet, P., Ciais, P., Miller, J. B. et al. (2006). Contribution of anthropogenic and natural sources to atmospheric methane variability. Nature, 443, 439-443.
Bryden, H. L., Longworth, H. R. & Cunningham, S. A. (2005). Slowing of the Atlantic meridional overturning circulation at 25°N. Nature, 438, 655-657.
Costanza, R., d'Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O'Neill, R. V., Paruelo, J., Raskin, R. G., Sutton, P. & van den Belt, M. (1997). The value of the world's ecosystem services and natural capital. Nature, 387, 253-260.
Crutzen, P. J. & Steffen, W. (2003). How long have we been in the Anthropocene Era? Climatic Change, 61, 251-257.
Crutzen, P. J. & Stoermer, E. F. (2000). The “Anthropocene”. IGBP Newsletter, 41, 12.
Daly, H. (1978). Steady-State Economics. W. H. Freeman & Co., NY.
DeLong, J. B. (1998). World GDP, One Million B.C. – Present (http://www.j-bradford-delong.net/TCEH/1998_Draft/World_GDP/Estimating_World_GDP.html).
Delworth, T. L. & Dixon, K. W. (2006). Have anthropogenic aerosols delayed a greenhouse-gas induced weakening of the North Atlantic thermohaline circulation? Geophysical Research Letters, 33, L02606.
Diamond, J. M. (2005). Collapse: How Societies Choose to Fail or Succeed. Viking Penguin, NY, 575 pp.
Ehrlich, P. R. & Ehrlich, A. H. (1990). The Population Explosion. Simon & Schuster, NY, 320 pp.
Ehrlich, P. R. & Holdren, J. P. (1971). Impact of population growth. Science, 171, 1212-1217.
Ehrlich, P. R., Ehrlich, A. H. & Holdren, J. P. (1977). Ecoscience: Population, Resources, Environment. W. H. Freeman & Co., San Francisco, 1051 pp.
Etheridge, D. M., Steele, L. P., Langenfelds, R. L., Francey, R. J., Barnola, J. M. & Morgan, V. I. (1996). Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn. J Geophys Res, 101, 4115-4128.
Georgescu-Roegen, N. (1971). The Entropy Law and the Economic Process. Harvard University Press, Boston, Mass.
Glasby, G. P. (1988). Entropy, pollution and environmental degradation. Ambio, 17, 330-335.
Glasby, G. P. (1995). Concept of sustainable development: a meaningful goal? Sci Total Environ, 159, 67-80.
Glasby, G. P. (2002). Sustainable development: The need for a new paradigm. Environment, Development and Sustainability, 4, 333-345.
Glasby, G. P. (2003). Potential impact on climate of the exploitation of methane hydrate deposits offshore. Marine and Petroleum Geology, 20, 163-175.
Glasby, G. P. (2006). Drastic reductions in utilizable fossil fuel reserves: an environmental imperative. Environment, Development and Sustainability, 8, 197-215.
Goodland, R., Daly, H., El Serafy, S. & von Droste, B. (eds) (1991). Environmentally Sustainable Economic Development: Building on Brundtland. UNESCO, NY, 98 pp.
Goudie, A. (2000). The Human Impact on the Environment, fifth edition. Blackwell, Oxford.
Hansen, J., Sato, M., Kharecha, P., Russell, G., Lea, D. W. & Siddall, M. (2007). Climate change and trace gases. Phil Trans R Soc A, 365, 1925-1954.
Kerr, R. A. (2006). A worrying trend of less ice, higher seas. Science, 311, 1698-1700.
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
200
G.P. Glasby
Keynes, J. M. (1931). Economic possibilities for our grandchildren. In: The Collected Writings of John Maynard Keynes, Vol. IX Essays in Persuasion, 321-332 (1972 edition, Macmillan, London). Keys, D. (2000). Catastrophe An Investigation into the Origins of the Modern World. Arrow Books, London., 509. Kintisch, E. & Buckheit, K. (2006). Along the Road from Kyoto Global greenhouse gas emissions keep rising. Science, 311, 1702-1703. Kvenvolden, K. A. & Rogers, B. W. (2005). Gaia’s breath—global methane exhalations. Marine and Petroleum Geology, 22, 579-590. Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Luch, W., Rahmstorf, S., Schellnhuber, H. J. (2008). Tipping elements in the Earth’s climate system. Proc Nat Acad Sci, 105, 17861793. Lutz, W., Sanderson, W. & Scherbov, S. (2001). The end of world population growth. Nature, 412, 543-545. Lovelock, J. E. (2006). The Revenge of Gaia: Why the Earth is Fighting Back – and How We Can Still Save Humanity. Allen Lane, London. 177. McNamara, R. S. (1995). In Retrospect the Tragedy and Lessons of Vietnam. Times Books Random House, NY, 414. Lynas, M. (2008). The climate change clock is ticking. The Guardian 1 August, 2008. Meinshausen, M., Meinshausen, N., Hare, W., Raper, S. C. B., Frieler, K., Knutti, R., Frame, D. J. & Allen, M. R. (2009). Greenhouse-gas emission targets for limiting global warming to 2 °C. Nature, 458, 1158-1162. Morris, D. (1994). The human animal. Crown Publishers, Inc, NY, 224. Myers, N. (2002). Environmental refugees: a growing phenomenon of the 21st century. Phil Trans R Soc Lond B, 357, 609-613. National Research Council (1999). Our Common Journey: A Transition Toward Sustainability. National Academy Press, Washington, DC 384. Orr, J. G., Fabry, V. J. & Aumont, O., Bopp, L. & others, (2005). Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms. Nature, 437, 681-686. Page, S., Siegert, F., Rieley J. O., Boehm, H. J., Jaya, A., Limin, S. (2002). The amount of carbon released from peat and forest fires in Indonesia during 1997. Nature, 420, 61-65. Pfister, C., Schellnuber, H. J., Ramstorf, S. & Graßl, H. (2005). Weather catastrophes and climate change Is there still hope for us? Munich Re, Munich., 264. Philipona, R., Durr, B., Ohmura, A. & Ruckstuhl, C. (2005). Anthropogenic greenhouse forcing and strong water vapor feedback increase temperature in Europe. Geophys Res. Lett., 32, L19809. Pierrehumbert, R. T. (2006). Climate change: A catastrophe in slow motion. Chicago Jl Internat Law, 6(2), 1-24. Raply, C. G. (2005). Arctic ice sheet and sea level rise. Proc. Int. Symp. on Stabilisation of Greenhouse Gas Concentrations, Exeter UK, Hadley Centre Met Office. Rees, M. (2003). Our Final Century A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s future in This Century—On Earth and Beyond. William Heinemann, London., 228. Rees, M. (2006). The G8 On Energy: Too Little. Science, 313, 59.
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
The Inevitability of Continuing Global Anthropogenic Environmental Degradation
201
Rignot, E. & Kanagaratnam, P. (2006). Changes in the velocity structure of the Greenland ice sheet. Science, 311, 986-990 Rintoul, S. R. (2007). Rapid freshening of Antarctic Bottom Water formed in the Indian and Pacific oceans. Geophy Res Letts, 54, L06606, doi:10.1029/2006GL028550 Ruddiman, W. F. (2003). The anthropogenic greenhouse era began thousands of years ago. Climate Change, 61, 261-293 Ruddiman, W. F. (2005). Plows, Plagues and Petroleum: How Humans took control of Climate. Princeton University Press, Princeton, 272. Secretariat of the Convention on Biological Diversity (2006). Global Diversity Outlook 2. Montreal, vii + 81. Shellnhuber, H. J., Cramer, W., Nakicenovic, N., Wigley, T. & Yohe, G. (2006). Avoiding Dangerous Climate Change. Cambridge University Press, Cambridge, 392. Simon, J. L. (1998). The Ultimate Resource 2. Princeton University Press, Princeton, 778. Stern, N. (2006). Stern Review: The Economics of Climate Change. HM Treasury, London. 575, + technical annexes. Strand, J. (2002). Environmental Kuznets curves: empirical relationships between environmental quality and economic development. Department of Economics University of Oslo Memorandum No 4/2002. 24 pp (http://www.oekonomi.uio.no) Tans, P. P. (2007). Trends in atmospheric carbon dioxide Recent monthly mean CO2 at Mauna Loa. NOAA Global Monitoring Division (http://www.esrl.noaa.gov/ccgg/trends/) Tans, P. P. & Conway, T. J. (2007). Monthly atmospheric CO2 mixing ratios from the NOAA Carbon Cycle Coperative Global Air Sampling Network, 1968-2002 (http://cdiac.ornl.gov/trends/co2/contents.htm). Thordarson Th, Self S 1993. The Laki (Skaftár Fires) and Grimsvötn eruptions in 1783-1785. Bulletin of Volcanology, 55, 233-263. Walter, K. M., Zimov, S. A., Chanton, J. P., Verbyla, D. & Chapin, F. S. (2006). Methane bubbling from Siberian thaw lakes as a positive feedback to climate warming. Nature, 443, 71-75 World Commission on Environment and Sustainability (1987). Our Common Future. Oxford University Press, Oxford, 400.
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved. Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 203-224
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 6
GENERAL OVERVIEW FOR WORLDWIDE TREND OF FOSSIL FUELS
Erkan Topal (1) and Shahriar Shafiee (2)
(1) Mining Engineering Department, Western Australia School of Mines, Curtin University of Technology, Kalgoorlie, WA 6433, Australia
(2) School of Engineering and CRC Mining, The University of Queensland, St Lucia, Qld 4072, Australia
Abstract

Crude oil, coal and gas, known as fossil fuels, are the main sources of world energy supply. Even though worldwide research has been conducted into renewable energy resources to replace fossil fuels, the global energy market will continue to depend on fossil fuels, which are expected to satisfy approximately 84% of energy demand in 2030. Views about the reserves of fossil fuels differ, and to date there is no scientific consensus on when non-renewable energy will be exhausted. Based on available reserve data and methods, coal will be the only remaining fossil fuel after 2042 and will be available until 2112. The world reserves of fossil fuels depend mainly on their consumption and prices. The trend of fossil fuel consumption over the last couple of decades has shown an upward tendency, which is expected to continue until at least 2030. Current predictions indicate that oil will be the main fuel supplying energy until 2030, although its consumption share will decline, followed by coal and gas. While nominal prices for fossil fuels have followed an escalating trend, real prices have individually fluctuated. Forecasting future fossil fuel prices is uncertain because it is difficult to capture all the significant variables, as well as the political implications, in a price forecasting model. This chapter individually reviews the reserves, demand, supply and prices of fossil fuels. Subsequently, it predicts and comments on future expectations for fossil fuels as a main source of world energy supply, considering their expected reserves, prices and the environmental barriers to their usage.
Keywords: world fossil fuel consumption, reserves, prices, future trends for non-renewable resources.
Introduction

Oil, coal and gas are non-renewable energy sources known as fossil fuels. They play a dominant role in the world energy market. Today the energy market is worth approximately 1.5 trillion US dollars and is led by fossil fuels [1]. The World Energy Outlook (WEO) 2008 claims that energy demand will expand by 45% between now and 2030, increasing at 1.6% annually. Energy generated from fossil fuels will remain the major source of energy and is still expected to meet more than 80% of energy demand by 2030. During the last seven years (2000-2007) coal demand grew faster than any other energy source, and coal is expected to account for more than a third of incremental global energy demand by 2030 [2]. In terms of global consumption, crude oil remains the most important primary fuel, accounting for 36.4% of the world's primary energy consumption (without biomass) [3]. The International Energy Agency (IEA) claims that oil demand will fall from 35% to 32% of total energy demand by 2030. In other words, in comparison with other types of fossil fuels, the share of oil will fall over the next 25 years, even though in absolute terms it will continue to be the main fuel supplying the global demand for energy until 2030. Consumption of natural gas is expected to rise, even if its usage does not increase as quickly as other fuel sources [4]. The main question for sustaining future energy demand is: are there sufficient resources of fossil fuel to satisfy future energy consumption? Views about world fossil fuel reserves differ and it is difficult to predict exactly when supplies of fossil fuels will be exhausted. However, according to the World Coal Institute [5], at current production levels proven oil, coal and gas reserves are estimated to last another 41, 155 and 65 years respectively. Likewise, [6] puts the fossil fuel reserve depletion times for oil, coal and gas at approximately 35, 107 and 37 years based on a continuous compounding rate. This means that coal reserves will be available up to 2112 and coal will be the only fossil fuel remaining after 2042. Proven fossil fuel reserves fluctuate according to economic conditions, especially fossil fuel prices: proven reserves shrink when prices are too low for fossil fuels to be recovered economically, and expand when fossil fuel prices increase. The trend of fossil fuel prices significantly affects fossil fuel consumption, international inflation, global GDP growth, etc. Fossil fuel price prediction is a dilemma: there is a big difference between the predictions of fossil fuel prices made over the last couple of decades and actual prices, because the price of fossil fuels depends on many unpredictable variables. To forecast a reasonable range of possible future fossil fuel prices and demands, it is necessary to have some prediction of fossil fuel consumption and future market needs. Reviewing world fossil fuel prices over the last few years, it can be seen that they have risen as a result of unexpected events such as terrorist attacks and the wars in Iraq and Afghanistan. Furthermore, oil, gas and coal prices demonstrated their highest volatility during 2008. In the first part of 2008, the oil price reached over $140 per barrel; it then plunged, on a monthly average, by more than 57% in the last six months of the year, from $105 to $45 per barrel. On a daily basis, the fluctuation ranged from $145 to $40 per barrel.
Coal and gas followed a similar trend, plunging on a monthly average by more than 60% and 50% respectively in the last six months of 2008. There is no doubt that fossil fuels will continue to account for the largest portion of world energy supply for the next couple of decades. There are two main challenges to securing the sustainable usage of fossil fuels as a major source of energy. The first
is security of energy supply, or whether enough energy will be available in a reliable and affordable way. To secure energy supply, WEO 2008 [2] estimates that a cumulative investment of 26 trillion dollars (in 2007 dollars) is required between now and 2030. The second is environmental protection, or the utilisation of these resources with respect to greenhouse gas emissions and world climate change. There are few available options to meet future energy needs as well as emission targets. The first available option is to continue to use fossil fuels with CO2 Capture and Storage (CCS). Promising industrial-scale projects and experiments have been developed, but these projects, at around a couple of million tonnes a year, still need further scale-up and cost reduction in the future. The second option is to use renewable and nuclear energy more intensively. Clean energy projects such as wind, solar, hydro and geothermal will be a major attraction for satisfying future energy needs. Currently, about 20% of the world's electricity comes from renewable energy. High fossil fuel prices, concern about climate change and government subsidies have recently positioned nuclear power as another possible important source of energy; currently it provides around 2% of the world's energy and 15% of the world's electricity. Nuclear power still needs public acceptance with regard to safety and waste disposal in order to be a strong candidate for future energy supply. Even though renewable and nuclear energy can be seen as the most effective and preferable alternatives for future energy needs, they will only be able to supply about 20% of the world's energy needs by 2030 [2]. The last option is to increase energy-use efficiency to reduce consumption. Fluorescent bulbs, hybrid and electric cars, and "green buildings" are good examples of measures that increase energy efficiency. A fluorescent bulb uses 75% less energy and lasts around ten times longer than an incandescent bulb [7]. Hybrid and electric cars may emit only about 30% of the CO2 of comparable gasoline cars, although the power must still be generated by a power plant. "Green buildings", which minimize the use of energy as well as other environmental impacts, have attracted enormous attention in recent years. As discussed further below for each of these options, there is no silver bullet that can meet these challenges; the solution for future energy sustainability is a combination of alternatives, such as using more energy-efficient products, more fossil fuels with carbon capture, more nuclear energy and more renewable sources. The organization of the chapter is as follows. The next section presents the trends in reserves, demand, supply and prices of fossil fuels. The following section predicts and comments on future expectations and trends of world energy supply, with available options and solutions.
Trend of Fossil Fuel Reserves

Predicting the world's fossil fuel reserves is an uncertain exercise, and nobody can forecast exactly when supplies of fossil fuels will be exhausted. Table 1 lists the world's fossil fuel reserves in giga tonnes of oil equivalent. Two important features can be observed from this table. First, coal has the largest worldwide reserves: around 65% of fossil fuel reserves in the world are coal and the remaining 35% are oil and gas. It is clear that coal will supply more energy than oil and gas in the future. On the other hand, world coal resource assessments were downgraded continuously from 1980 to 2005, by an overall 50%; thus in practice, resources have never been reclassified into reserves for more than two decades despite the increase in coal prices [8]. Even though coal reserve data is biased and has shrunk
over the last 25 years, coal reserves are still the biggest of all fossil fuels. Second, while oil and gas reserves are concentrated in a few locations, coal is abundant and broadly distributed around the world: "economically recoverable reserves of coal are available in more than 70 countries worldwide and in each major world region" [9]. In other words, coal reserves are globally distributed and not limited mainly to one location as oil and gas are. One of the interesting facts about fossil fuels is that, despite the rise in consumption, the quantities of proven reserves have also risen with time. According to WCI [11], the ratio of reserves to production of fossil fuels has remained nearly constant for decades, at around 40, 60 and 150 years for oil, gas and coal respectively. However, if gas and oil reserves are exhausted, more coal will need to be used as a substitute energy source; in that case, coal reserves would not last more than 80 years on today's trends. Consequently, fossil fuel reserves are running out, and the supply of fossil fuels is inelastic.

Table 1. Location of the World's Main Fossil Fuel Reserves in 2006 [10]

Reserves in giga tonnes of oil equivalent:

Region                     Oil   Coal   Gas    Sum
North America                8    170     7    185
South America               15     13     6     34
Europe                       2     40     5     47
Africa                      16     34    13     63
Russia                      18    152    52    222
Middle East                101      0    66    167
India                        1     62     1     64
China                        2     76     2     80
Australia and East Asia      2     60    10     72
Total                      165    607   162    934

Reserves in percent:

Region                      Oil    Coal    Gas     Sum
North America              0.86   18.20   0.75   19.81
South America              1.61    1.39   0.64    3.64
Europe                     0.21    4.28   0.54    5.03
Africa                     1.71    3.64   1.39    6.75
Russia                     1.93   16.27   5.57   23.77
Middle East               10.81    0.00   7.07   17.88
India                      0.11    6.64   0.11    6.85
China                      0.21    8.14   0.21    8.57
Australia and East Asia    0.21    6.42   1.07    7.71
Total                     17.67   64.99  17.34  100.00
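As a rough illustration of how such depletion horizons can be derived, the sketch below applies a generic continuous-compounding model: a reserve R consumed at a rate that starts at C and grows exponentially at rate r is exhausted after T = ln(1 + rR/C)/r years. This is not the specific model used in the studies cited above, and the 2% growth rate is an assumed figure chosen purely for illustration.

```python
from math import log

def depletion_time(r_over_p: float, growth_rate: float) -> float:
    """Years until a reserve is exhausted when consumption grows exponentially.

    r_over_p    -- static reserve-to-production ratio (years at constant output)
    growth_rate -- continuous annual growth rate of consumption (assumed)
    """
    # Cumulative use C*(exp(r*T) - 1)/r equals the reserve R when
    # T = ln(1 + r * R/C) / r, where R/C is the static ratio.
    return log(1 + growth_rate * r_over_p) / growth_rate

# Static R/P ratios quoted in the text; 2% p.a. growth is an illustrative assumption.
for fuel, ratio in (("oil", 41), ("gas", 65), ("coal", 155)):
    print(f"{fuel}: {ratio} yr static -> {depletion_time(ratio, 0.02):.0f} yr with growth")
```

Even this crude model makes the qualitative point of this section: any sustained growth in consumption compresses the static reserve-to-production horizons substantially.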
Fossil fuel consumption and price are the two main parameters that affect the trend of fossil fuel reserves. The following paragraphs discuss these two variables individually. Figure 1 depicts the trends of oil consumption and reserves: both have increased, and they show an unusually strong positive correlation (95%) over time. In other words, as oil consumption increased from 22.53 billion barrels in 1980 to 32 billion barrels in 2009, oil reserves also increased, from 645 to 1,342 billion barrels over the same period. However, most of the increase in reserves has come not from new discoveries but from revisions made in the 1980s in OPEC countries. The total recoverable conventional oil resources, including initial proven and probable reserves from discovered fields and oil that has yet to be found, are estimated at 3.5 trillion barrels, of which only 1.1 trillion barrels have been extracted. Undiscovered resources account for about 30% of the remaining recoverable oil and are expected to be located in the Middle East, Russia and the Caspian region. Non-conventional oil resources, oil sands and extra-heavy oil, account for around another 3 trillion barrels of oil; these resources are located mainly in the Alberta province in Canada and in the Orinoco Belt in Venezuela. Fatih Birol, chief economist of the IEA, noted in 2008 that output from active oil fields will decline in the future and that we have to find new deposits with 45 million barrels per day of production capacity (four times the current capacity of Saudi Arabia) to meet the constant consumption target until 2030. If we want to meet the
growth in demand, we need to find new deposits with 64 million barrels per day of production capacity (six times the current capacity of Saudi Arabia).

[Figure 1 appears here: oil reserves (left axis, billion barrels) and world oil consumption (right axis, billion barrels) plotted against time.]
Figure 1. Trends of world crude oil proven reserves and oil consumption from 1980 to 2009. Data collected from EIA and BP.
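The reserve-consumption correlations quoted in this section (95% for oil, -47% for coal, 98% for gas) are ordinary Pearson coefficients between two annual series. A minimal sketch of the computation follows; the series below are illustrative placeholders, not the EIA/BP data behind the figures.

```python
import numpy as np

# Placeholder annual series (NOT the EIA/BP data behind Figure 1).
reserves = np.array([645.0, 700.0, 790.0, 900.0, 1020.0, 1200.0, 1342.0])   # billion barrels
consumption = np.array([22.5, 23.2, 24.4, 26.1, 28.0, 30.4, 32.0])          # billion barrels

# Pearson correlation coefficient between the two series.
r = np.corrcoef(reserves, consumption)[0, 1]
print(f"reserve-consumption correlation: {r:.0%}")
```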
Figure 2 shows coal reserves and consumption from 1987 to 2007. As can be seen, the trend of coal reserves has the expected negative relationship with consumption; however, the gap between the two trends is unusually large. In other words, around 100 billion tonnes of coal were consumed over the last 20 years, while coal reserves decreased dramatically, by 750 billion tonnes, even with a steady coal price. This indicates that coal reserve estimations are not reliable and do not reflect real values. Despite the fact that data for coal is less available and more volatile than for oil and gas, the relationship between coal reserves and coal consumption is still negative and significant (with a -47% correlation). Figure 3 shows the trend of gas reserves and consumption from 1980 to 2009, which follows a similar pattern to oil. Gas consumption increased from 52 trillion cubic feet in 1980 to 106 trillion cubic feet in 2009, while gas reserves increased from 2,573 trillion cubic feet to 6,253 trillion cubic feet over the same period. Like oil resources, natural gas resources are highly concentrated in a small number of countries and regions: Russia, Iran and Qatar hold 56% of the world's reserves [2]. The correlation between these two gas variables is 98%, a very strong and unusual positive relationship. The other important variable that affects the size of fossil fuel reserves is price. Available reserves tend to shrink when prices are low, making some reserves uneconomical to recover, and expand when prices are high, turning some of the available resources into reserves. Figure 4 depicts oil reserves versus oil price from 1980
to 2009 (oil prices are yearly averages; the 2009 value is based on the first five months). Oil reserves and oil price have both increased over the past 29 years and display a weak positive correlation (57%).
[Figure 2 appears here: coal reserves (left axis, billion tonnes) and world coal consumption (right axis, billion tonnes) plotted against time.]

Figure 2. Trends of world coal proven reserves and coal consumption from 1987 to 2007. Data collected from EIA and BP.
[Figure 3 appears here: gas reserves (left axis, trillion cubic feet) and world gas consumption (right axis, trillion cubic feet) plotted against time.]
Figure 3. Trends of world natural gas proven reserves and gas consumption from 1980 to 2009. Data collected from EIA and BP.
[Figure 4 appears here: world oil reserves (left axis, billion barrels) and oil price (right axis, dollars per barrel) plotted against time.]

Figure 4. Trend of world crude oil proven reserves and oil price from 1980 to 2009. Data collected from EIA and BP.
[Figure 5 appears here: world coal reserves (left axis, billion tonnes) and coal price (right axis, dollars per short ton) plotted against time.]
Figure 5. Trend of world coal proven reserves and coal price from 1987 to 2007. Data collected from EIA and BP.
Figure 5 demonstrates that from 1987 to 2007 coal reserves and coal price did not have any significant correlation (7%). Figure 6 shows the trend of gas reserves and gas price from 1980 to 2009; as with oil, reserves and price follow a similar trend. The correlation between these two gas variables is 79%, which shows a very strong positive relationship.
[Figure 6 appears here: world gas reserves (left axis, trillion cubic feet) and gas price (right axis, dollars per thousand cubic feet) plotted against time.]
Figure 6. Trend of world natural gas proven reserves and gas price from 1980 to 2009. Data collected from EIA and BP.
Trend of Fossil Fuel Production and Consumption

The Energy Information Administration (EIA) has forecast that energy consumption will increase at an average rate of 1.1% per annum, from 500 quadrillion Btu in 2006 to 701.6 quadrillion Btu in 2030 [12]. China and India will command more than 50% of the increase in world energy demand between 2006 and 2030. Although many believe that the implementation of the Kyoto commitments will shrink worldwide fossil fuel consumption, fossil fuels will continue to play a major role in world energy supply for many decades to come. As WEO 2008 asserted, around 80% of energy up to 2030 will come from fossil fuels [2]. The International Energy Agency (IEA) claims that the demand for oil, the single largest consumable fossil fuel in the global energy market, will decrease from 35% to 32% of total demand by 2030. Global primary demand for oil (without biofuels) will rise by 1% on average every year, from 85 million barrels per day in 2007 to 106 million barrels per day in 2030. Based on WEO 2008, all the growth in oil demand will come from non-OECD countries; it is expected that China, India and the Middle East will contribute 43%, 20% and 20% respectively. WEO 2008 claims that the world's oil reserves are large enough to meet demand beyond 2030: available reserves have doubled since 1980 and are enough to supply world demand for at least 40 years at the current rate of consumption [2]. Coal has the largest worldwide reserves and resources compared with oil and gas (two-thirds of all fossil fuel reserves). According to the EIA, coal accounted for about 27% of world energy consumption in 2007. According to WEO 2007 and 2008, world demand for coal will increase on average by 2% per year, its share of global energy demand climbing from 26% in 2006 to 29% in 2030. During the last seven years (2000-2007) coal demand grew faster than any other energy source, and it is expected to account for more than a third of incremental global energy
demand to 2030 (IEA, 2007b). The WEO 2007 states that "coal is seen to have the biggest increase in demand in absolute terms, jumping by 73% between 2005 and 2030". Roughly two-thirds of the coal produced is used for electricity generation and about 23% for industrial purposes. It is therefore expected that coal will supply more energy than oil and gas in the future [2],[12]. Global demand for natural gas grows at around 1.8% per year, and its share of total energy demand accounts for around 22%. During the last seven years (2000-2007) annual demand for oil, coal and gas grew by 1.8%, 4.8% and 2.6% respectively; gas demand growth is the second largest after coal [2]. AEO 2008 forecasts that "natural gas consumption increases from 21.7 trillion cubic feet in 2006 to 23.8 trillion cubic feet in 2016, then declines to 22.7 trillion cubic feet in 2030" [13]. Figure 7 illustrates the trend of fossil fuel consumption worldwide from 1965 to 2030. All three types of fossil fuel are expected to show an increasing trend over the next 22 years. World oil consumption is always greater than that of coal and gas; likewise, coal consumption is consistently greater than gas consumption [14].
[Figure 7 appears here: world coal, liquid fuels and natural gas consumption in quadrillion Btu, with history to 2009 and projections to 2030.]
Figure 7. Consumption of fossil fuel worldwide from 1965 to 2030. Data collected from EIA and BP.
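The growth figures quoted in this section compound in a straightforward way. The short check below is my own arithmetic applied to the endpoint values quoted above; it shows, for example, that 1.6% annual growth indeed accumulates to roughly the 45% rise WEO 2008 projects, and reproduces the rise-then-decline shape of the AEO 2008 gas path.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# WEO 2008: 1.6% per year compounds to roughly the quoted 45% rise by 2030.
print(f"1.6%/yr over 23 years: {(1.016 ** 23 - 1):.0%} cumulative")   # ~44%

# AEO 2008 gas path: 21.7 tcf (2006) -> 23.8 tcf (2016) -> 22.7 tcf (2030).
print(f"2006-2016: {cagr(21.7, 23.8, 10):+.2%} per year")             # rising
print(f"2016-2030: {cagr(23.8, 22.7, 14):+.2%} per year")             # declining
```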
Energy demand for a given country is mainly dependent on its GDP. Figure 8 illustrates the worldwide proportion of energy consumption and GDP for selected countries in 2007. As can be seen from the figure, the larger a country's GDP, the larger its energy consumption. In 2007 these selected countries produced 66% of the world's GDP and consumed 67.5% of the world's energy. The dominant country in this selection was the US, holding 25% of world GDP and consuming 21.6% of world energy. Energy consumption and GDP shares are parallel for some countries, such as the US, Australia, Canada and South Korea. Some countries, however, like China, Russia and Saudi Arabia, consume considerably more energy than their GDP percentage would suggest; in contrast, Japan, Germany, the United Kingdom and France consume significantly less.
[Figure 8 appears here: bar chart of the percentage shares of world energy consumption and GDP for the US, China, Japan, India, Germany, Russia, Australia, Canada, South Korea, France, the United Kingdom, Saudi Arabia and the rest of the world.]
Figure 8. Worldwide proportion of energy consumption and GDP for selected countries in 2007. Data collected from BP, WDI.
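One convenient way to read Figure 8 is as the ratio of each country's share of world energy use to its share of world GDP: values near 1 indicate the parallel pattern described above, and values well above 1 indicate energy-intensive economies. In the small sketch below, only the US pair (21.6% of energy, 25% of GDP) is taken from the text; the other shares are illustrative placeholders.

```python
# Shares of world energy consumption and world GDP in 2007 (percent).
# The US values are quoted in the text; the others are illustrative placeholders.
shares = {
    "US":    (21.6, 25.0),
    "China": (16.8, 6.0),    # placeholder values
    "Japan": (4.7, 8.1),     # placeholder values
}

for country, (energy_share, gdp_share) in shares.items():
    # Ratio > 1: the country consumes more energy than its GDP share would suggest.
    print(f"{country}: energy/GDP share ratio = {energy_share / gdp_share:.2f}")
```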
According to the International Energy Agency [15], from 2005 to 2030 the demand for oil will grow by 1.3% per annum, roughly in line with global GDP, averaging 1.7% from 2005 to 2015 and 1.1% from 2015 to 2030. Figure 9 shows the world growth of total energy use per capita and GDP per capita between 1960 and 2007. The two series show a strong relationship, with a correlation coefficient of about 0.82.

[Figure 9 appears here: annual growth of energy use per capita (kg of oil equivalent) and of GDP per capita (%), 1960-2007.]

Figure 9. The world growth of total energy use per capita and GDP per capita from 1960 to 2007. Data collected from WDI.
Trend of Fossil Fuel Prices

There is a big difference between past predictions of fossil fuel prices over the last couple of decades and the actual prices. The price of fossil fuels is dependent on many unpredictable variables, and considering all of the significant variables, as well as the political implications, makes it difficult to forecast. Furthermore, Bachmeier et al. (2006) claim that the oil, coal and gas markets are only very weakly integrated; their results indicate that there is not one primary energy market for fossil fuels [16]. Of all the types of fossil fuel price forecasting, oil price prediction is the most volatile and receives the most attention. For example, some studies indicate that the outlook for black gold is bleak only in the short term, and that in the long term crude oil prices will soar, possibly even quintupling over the next few years, because of dwindling supplies, resurgent demand and a lack of investment in oil [17,18]. On the other hand, other researchers predict that the jumps and dips in crude oil prices are short-term phenomena and that oil prices will revert to their long-term trend. For instance, the International Energy Agency (IEA) predicts that oil will rise to around $100 and ultimately $200 a barrel by 2015 (IEA 2008). Additionally, Deutsche Bank AG (DB), Merrill Lynch & Co. Inc. (MER) and Goldman Sachs Group Inc. (GS) on Wall Street individually predicted oil prices of approximately $50 in 2009. Table 2 presents contradictory oil price predictions from different sources for 2010 to 2030. The same contradictions can be seen in predictions of future coal and gas prices. World Energy Outlook (WEO) 2006 predicted that fossil fuel prices would be stable over the long term, and was unable to predict the jump in coal and gas prices in 2008 [4]. On the other hand, WEO 2008 predicted that coal prices will jump and settle at around $120 per tonne in real terms in 2010, remain flat through to 2015, and fall back slightly to $110 in 2030. In contrast, the EIA forecast the price of coal to decrease to only about US$20 per short ton by 2021 and to increase to only around US$20.63 per short ton in 2025 [10]. Moreover, WEO 2008 predicted gas prices to jump in 2008, then fall back slightly through to 2010 and begin to rise after 2015 in line with oil prices [2]. Furthermore, there has always been robust discussion about the historical trends of natural resource commodity prices and whether they are deterministic or stochastic (for more information refer to [20,21,22,23,24,25,26,27,28]). Price has two different definitions, nominal and real. Nominal price refers to a value expressed in current terms; real price adjusts for the effect of inflation. The real and nominal prices of fossil fuels therefore differ: normally the nominal price tends to increase while the real price of major fossil fuels decreases. This section reviews the long-term trend of real and nominal fossil fuel prices. Figure 10 shows the historical nominal prices of fossil fuels from 1950 to 2008. As can be seen from this figure, oil, gas and coal prices fluctuated similarly from 1950 to 2000; after 2000, oil and gas price movements diverged from coal price movements. Moreover, the figure shows that oil and gas prices soared from 2000 to 2008, a period in which coal prices also increased steadily, but not to the same degree.
More specifically, oil and gas prices rose more than four-fold in less than nine years, while the coal price only doubled in that period. However, fossil fuel prices dropped dramatically during the first six months of 2009: as illustrated in Figures 4 and 6, average oil and gas prices declined by more than 50% in that period, and coal followed a similar trend. Consequently, the soaring fossil fuel prices of the last decade were offset in the first six months of 2009.
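The nominal-to-real conversion used throughout this section divides each year's nominal price by a price index for that year and rescales to a base year. A minimal sketch follows, using assumed CPI index values and an assumed 1976 coal price rather than the deflator series behind the figures.

```python
# Assumed US CPI index values and an assumed 1976 nominal coal price;
# neither value is taken from the chapter's data sources.
cpi = {1976: 56.9, 2008: 215.3}
nominal_coal_1976 = 19.40            # hypothetical $/short ton, in 1976 dollars

# Real price in 2008 dollars: nominal * (CPI_base_year / CPI_price_year).
real_2008 = nominal_coal_1976 * cpi[2008] / cpi[1976]
print(f"1976 coal price in 2008 dollars: ${real_2008:.2f}/short ton")
```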
Table 2. Projection of world oil prices, 2010-2030 (2007 dollars per barrel). Reference: [18]

Projection                    2010     2015     2020     2025     2030
AEO2008 (reference case)     75.97    61.41    61.26    66.17    72.99
AEO2008 (high price case)    81.08    92.77   104.74   112.10   121.75
AEO2008 (reference case)     80.16   110.46   115.45   121.94   130.43
DB                           47.43    72.20    66.09    68.27    70.31
IHSGI                       101.99    97.60    75.18    71.33    68.14
IEA (reference)             100.00   100.00   110.00   116.00   122.00
IER                          65.24    67.03    70.21    72.37    74.61
EVA                          57.09    74.61    95.33   105.25   116.21
SEER                         54.82    98.40    89.88    82.10    75.00

DB: Deutsche Bank AG; IHSGI: IHS Global Insight; IEA: International Energy Agency; IER: Institute of Energy Economics and the Rational Use of Energy at the University of Stuttgart; EVA: Energy Ventures Analysis, Inc.; SEER: Strategic Energy and Economic Research, Inc.
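The disagreement among the forecasters in Table 2 can be summarised by the spread of their 2030 projections; the quick pass below simply reuses the table's own 2030 column.

```python
# 2030 projections from Table 2 (2007 dollars per barrel).
projections_2030 = [72.99, 121.75, 130.43, 70.31, 68.14, 122.00, 74.61, 116.21, 75.00]

lo, hi = min(projections_2030), max(projections_2030)
mean = sum(projections_2030) / len(projections_2030)
print(f"2030 oil price forecasts: ${lo:.0f}-${hi:.0f}, mean ${mean:.0f} per barrel")
```

The highest 2030 forecast is almost twice the lowest, which underlines the point that there is no consensus on the long-term oil price.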
[Figure 10 appears here: nominal coal price (US $/short ton), oil price (US $/barrel) and gas price (US $/thousand cubic feet), 1950-2008.]
Figure 10. The average yearly historical trend of nominal fossil fuel prices from 1950 to 2008. Data collected from [2].
Figure 11 depicts the average yearly historical trend of real oil, gas and coal prices from 1950 to 2008. The figure shows that real oil and gas prices reached their maximum for the era in 2008; nevertheless, the real coal price in 1976 was twice the 2008 price. These figures indicate that oil and gas prices move together while coal prices change with a lag. Table 3 presents the correlations between oil, gas and coal prices in nominal and real terms. It shows a positive and significant correlation between the prices of oil and gas, at around 0.95 and 0.86 in nominal and real terms respectively; this strong positive correlation can be seen in Figures 10 and 11 as well. Conversely, the relationship between oil and coal prices weakens to 0.74 in nominal and 0.27 in real terms, and becomes even more fragile between gas and coal prices, at 0.71 and 0.02 in nominal and real terms respectively. The correlation of 0.02 shows that real coal and gas price movements are independent of each other. Applying a one or two year lag effect on nominal coal versus oil
price increases the correlation by 4% and 5% respectively; on the other hand, the lag effect does not change the correlation values for nominal coal versus gas prices. Consequently, oil and gas price movements have shown significant correlation, in both nominal and real terms, over the last sixty years, while coal price movements have had only a weak relationship with oil and gas prices.
[Figure 11 appears here: real coal price (US $/short ton), oil price (US $/barrel) and gas price (US $/thousand cubic feet), 1950-2008.]
Figure 11. The average yearly historical trend of real fossil fuel prices from 1950 to 2008. Data collected from [2].
Oil, gas and coal value movements can be measured by generating an index of fossil fuel prices that is independent of real or nominal terms. This index is computed as the ratio of the oil price to the other fossil fuel prices. Figure 12 plots the ratio of the price of one barrel of oil to the price of one thousand cubic feet of gas and to the price of one short ton of coal from 1950 to 2008. As can be seen in Figure 12, one barrel of oil was equivalent to 0.5 short tons of coal in 1950 and to more than about 2.5 short tons in 2008. This more than five-fold increase in the ratio confirms that coal became cheaper relative to oil between 1950 and 2008. In contrast, one barrel of oil was equivalent to 36 thousand cubic feet of gas in 1950 but to only approximately 11 thousand cubic feet in 2008; in other words, gas became more expensive relative to oil. Comparing the two ratios shows that the coal price also fell relative to the gas price. Consequently, the index indicates that coal lost value and gas gained value across all fossil fuel prices from 1950 to 2008. A small sketch after Table 3 illustrates the computation.

Table 3. Correlation between real and nominal fossil fuel prices

Correlation    Oil and Gas    Oil and Coal    Gas and Coal
Nominal               0.95            0.74            0.71
Real                  0.86            0.27            0.02
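A minimal sketch of the cross-fuel index just described follows; the 1950 and 2008 prices below are rough illustrative values consistent with Figure 12, not the underlying data series.

```python
# Rough illustrative prices (not the chapter's data series).
oil_usd_per_bbl = {1950: 2.77, 2008: 97.0}
coal_usd_per_ston = {1950: 4.84, 2008: 39.0}
gas_usd_per_mcf = {1950: 0.077, 2008: 8.85}

for year in (1950, 2008):
    oil = oil_usd_per_bbl[year]
    # Oil-to-coal and oil-to-gas price ratios; unit-free once formed, so they
    # can be compared across years regardless of nominal or real terms.
    print(f"{year}: oil/coal = {oil / coal_usd_per_ston[year]:.2f}, "
          f"oil/gas = {oil / gas_usd_per_mcf[year]:.0f}")
```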
[Figure 12 appears here: ratio of the oil price to the coal price (left axis) and of the oil price to the gas price (right axis), 1950-2008.]
Figure 12. The average yearly historical ratio of the one barrel oil price to one thousand cubic feet gas price and one barrel oil price to one short ton coal price from 1950 to 2008. Data collected from [2].
Future Expectations and Trends for Fossil Fuels

There is no doubt that fossil fuels will continue to account for the largest portion of world energy supply for the next four decades. There are two main challenges that we will face in securing the sustainable usage of fossil fuels as a major source of energy. The first is securing energy supply, or how to get these available sources of energy from the places of production to the places of need in a reliable and affordable manner. The second is environmental protection, or how to manage the utilisation of these resources with respect to greenhouse gas emissions and world climate change by transforming them into environmentally benign systems of energy supply. In order to make energy services available to consumers, major investments are needed. WEO (2008) estimates a required cumulative investment of 26 trillion dollars (in 2007 dollars) between 2007 and 2030. Around 52% of this investment will be for the power sector, while the remainder will predominantly be for oil and gas exploration and development; this is especially true for non-OECD countries. Over 50% of projected global energy investment goes to maintaining the current level of supply capacity, and most of the current infrastructure used to provide oil, gas, coal and electricity will need to be replaced by 2030 [2]. Mandil (2008) mentions that we are not on track to meet the 26 trillion dollar target, for three major reasons. First, increasing political and regulatory uncertainty makes investments riskier and requires higher returns for investors, which reduces the number of selected projects. Second, while societies expect full-scale energy services, they do not want the related infrastructure close to their homes ("not in my back yard"). Third, an increasing number of countries only allow national investors in their energy projects. Furthermore, the extraction of fossil fuels is becoming more expensive as the shallower fuel deposits become depleted. The recent global financial crisis will also affect fossil fuel investments: the longer the crisis lasts, the more projects will be delayed. Companies will react to the lack of
available capital, higher financial costs and the lower returns on investment that result from lower energy prices [28]. While fossil fuels will meet world energy needs for several decades, their use faces economic, environmental and social challenges. World energy sources need to handle their carbon footprint in order to prevent irreversible damage to the world's climate. When fossil fuels are burned, one of the most significant environmentally harmful gases emitted is carbon dioxide, a gas that traps heat in the earth's atmosphere. Burning fossil fuels has generated more than a 25 percent increase in the amount of carbon dioxide in our atmosphere over the last 150 years. Based on Energy Information Administration (EIA) figures, burning fossil fuels produces around 21.3 billion tonnes of carbon dioxide per year, but natural processes are estimated to absorb only about half of that amount, so there is a net increase of 10.65 billion tonnes of atmospheric carbon dioxide per year. The United States, with about 5% of the world's population, produces roughly one quarter of all global greenhouse gas emissions, about 85% of which come from energy-related carbon dioxide emissions from the combustion of fossil fuels [29]. During the last century, the global temperature increased by 0.78 ± 0.18 °C. It is believed that human-produced increases in the concentrations of greenhouse gases (water vapor, carbon dioxide, methane, nitrous oxide) in the atmosphere have caused this global temperature increase, rather than natural causes such as volcanic eruptions and variations in the earth's orbit around the sun. Figure 13 shows the increase in the global mean earth surface temperature anomaly relative to 1961-1990. Based on the latest Intergovernmental Panel on Climate Change (IPCC) report, global surface temperatures are likely to rise a further 1.1 to 6.4 °C (2.0 to 11.5 °F) during the twenty-first century [30]. Political and public debate continues over the appropriate action to take on global warming. Most national governments have signed and ratified the Kyoto Protocol, which is aimed at reducing greenhouse gas emissions. Figure 14 compares the emission reductions of the 38 developed countries with binding commitments under the Kyoto Protocol against developing and least developed countries with no binding commitments in the first commitment period. The biggest boost in emissions has taken place in developing countries, mainly China and India. In 2006, China became the largest emitter, surpassing the US, and India is expected to become the third largest emitter, overtaking Russia [31].
Figure 13. The global mean earth surface temperature anomaly relative to 1961–1990 [30].
Figure 14. Historical emission share of the world [31].
As of February 2009, 183 nations had signed and ratified the Kyoto Protocol, which is aimed at reducing global warming. The agreement requires those nations to reduce greenhouse gas emissions by 5.2% compared to 1990 levels. There is no doubt that the energy sector will be the major player, and fossil fuels the central victim, in curbing emissions. There are few available options for meeting future energy needs as well as emission targets. The first and most likely option is to continue using fossil fuels with CO2 capture and storage (CCS). The CCS process consists of separating CO2 from industrial and energy-related sources, transporting it to a storage location and isolating it from the atmosphere for the long term. The carbon capture process can be performed in three ways. The first is pre-combustion capture, in which the CO2 is separated from the original energy source so that the fuel no longer contains it when it is burned. The second is post-combustion capture, in which the CO2 is captured after the fuel is burned but before the flue gas leaves the power plant. The third is oxyfuel combustion, in which the fuel is burned in pure oxygen, producing a concentrated CO2 stream that is easier to capture as waste. The next question is where the captured CO2 should be stored so that it remains isolated. Figure 15 presents a possible CCS system and the possible storage options. The first option is storing the CO2 in the ground. Gases have been stored underground for millions of years, so this process means reversing the extraction process: rather than extracting gas from the ground, CO2 is simply put back into the ground. Successful industrial-scale storage projects are in operation: around 1 Mt of CO2 per year has been injected beneath the North Sea at Sleipner since 1996, and injection also takes place at the In Salah gas field in Algeria. Furthermore, 30 Mt of non-anthropogenic CO2 are injected annually in the West Texas region, and 1-2 Mt of CO2 are injected annually at the Weyburn project in Canada [33]. Sustainable Energy Ireland announced in September 2008 that power stations which capture and
store carbon instead of releasing it into the atmosphere could become economically viable in Ireland within five years. The captured CO2 could be stored in geological vaults such as the former Kinsale gas field [34].
Figure 15. Schematic diagram of possible CCS systems showing the sources for which CCS might be relevant, transport of CO2 and storage options (Courtesy of CO2CRC) [33].
Figure 16. Average annual growth rates by energy source (2002-2007) [39].
The second storage option is burying the CO2 in the ocean. Towards the end of the 20th century, an experiment was conducted by the Monterey Bay Aquarium Research Institute to see whether deep-ocean carbon storage was feasible. In this experiment, the researchers injected several liters of liquid CO2 into a glass beaker at a depth of 3,600 meters. The liquid carbon dioxide spilled over the top of the beaker, bounced along the seafloor and was easily carried away by the currents. It was recognised that large amounts of CO2 might badly affect the ocean's ecosystem, although there are other proposals to store the CO2 beneath the ocean floor. As can be seen above, some promising experiments and industrial-scale storage projects have been conducted and established, but the overall capacity needs to increase from a couple of million tonnes a year to one billion tonnes a year in order to be successful [35],[36],[37],[38]. The second main option is to use renewable and nuclear energy more intensively. Clean energy will play a crucial role in meeting future energy needs. Renewable energy technologies have been advancing steadily since the late 1970s, and in recent years clean energy projects such as wind, hydro, solar and geothermal have attracted considerable attention. Today renewable energy sources supply almost 20% of the world's electricity. As can be seen in Figure 16, solar and wind capacity are expanding annually at 40% and 24% respectively, and annual production of solar cells rose 51% in 2007. In regions such as California and Italy, this energy is becoming cost-competitive with the retail price of electricity within the next three years. In 2007, wind accounted for 40% of new generating capacity installed in Europe and 35% in the USA; total wind capacity passed 100 gigawatts in early 2008, double the 2004 figure. Geothermal energy, the heat stored in the earth, is another important source of energy, even though it provides only 10 GW of world power. Another energy source for the future could be nuclear plants. Concerns about climate change, high fossil fuel prices and government subsidies have recently refreshed interest in nuclear power. Today it provides around 2% of world energy and 15% of the world's electricity, with 372 gigawatts (GW) of capacity. The main issues with nuclear power as a substitute for fossil fuels are plant safety, radioactive waste disposal and the potential use of fuel for manufacturing nuclear weapons. Furthermore, a nuclear power plant requires large capital investment (costing twice as much as a coal plant to build and five times as much as a natural gas plant) and long lead times (planning, licensing and constructing a single nuclear plant usually takes 10 to 15 years), which makes this option riskier than other substitutes.
Figure 17. World primary energy supply from available sources [2].
Even though atmospheric pollution and the depletion of fossil fuels leave renewable energy sources as the most effective and preferable alternative for future energy needs, they will not hold a significant share of energy supply in the near future. Figure 17 presents world primary energy demand from the available sources. As can be seen from the figure, total renewable energy sources will be able to supply less than 20% of world energy demand by 2030. They are still not cost-efficient, and they need government sponsorship, such as tax subsidies, partial co-payment schemes and various rebates on the purchase of renewables, to generate enough momentum in the market. There are also concerns and criticisms about renewable energy applications: they may create pollution, be dangerous, take up large amounts of land, or be incapable of generating a large enough net amount of energy. The third option is to increase energy-use efficiency; making homes, businesses and cars more energy efficient is another strategy, alongside the others, for sustaining and meeting future energy needs. Firstly, this can be achieved by producing energy-efficient appliances such as refrigerators, ovens, stoves and dishwashers. Modern appliances use significantly less energy than older ones; for example, current energy-efficient refrigerators use 40% less energy than conventional models did in 2001. Secondly, energy-use efficiency can be achieved through energy-efficient buildings. One of the greatest potential areas for energy savings lies in buildings, which consume 40% of global energy and emit a considerable share of CO2. With today's available technology, the energy needs of buildings can be reduced by 70% or more through better insulation, more efficient lighting and appliances, improved doors and windows, and so on. In recent years, "green buildings", which minimize the use of energy as well as other environmental impacts, have attracted considerable attention around the world. For example, the U.S. Green Building Council, which developed a set of voluntary standards, now has more than 15,000 member organizations, and European countries are moving rapidly to green buildings with strong governmental support [40]. Thirdly, energy can be used better by making industry energy efficient. One opportunity to increase energy productivity is the extensive use of combined heat and power (CHP), also known as cogeneration: when electricity is generated in industry, the heat produced as a by-product can be captured and used for process steam, heating or other industrial purposes. In the United States, for example, the waste heat from power plants is equivalent to all of the energy consumed in Japan. Finally, energy efficiency can be achieved by making vehicles more efficient. Cutting-edge design, including reduced vehicle weights and more advanced tires, may reach twice the fuel efficiency of the average automobile. Another growing trend in automotive efficiency is hybrid and electric cars. For example, the Mitsubishi MIEV is a zero-emissions vehicle: even when the CO2 emissions of the power plant that generates the electricity for charging the car are considered, it emits approximately only 30% of the CO2 that a gasoline mini car would. Plug-in hybrid cars, like the Toyota Prius, make it possible to drive for limited distances without burning any gasoline.
In this case, energy efficiency is dictated by whatever process (coal-burning, hydroelectric, etc.) creates the power. Figure 18 shows good industrial-scale examples of energy efficiency. Based on Mandil (2008), world emissions are around 24 billion tonnes per year and are increasing by 500 million tonnes a year. In order to keep the average temperature increase to 2 °C, we need a reduction of one billion tonnes per year. To achieve that target, every year we would need to close 300 coal-fired power plants of 500 MW capacity each, which is not possible. One solution could be to avoid consuming this electricity. If we replace all incandescent light
bulbs with fluorescent bulbs worldwide, this would reduce CO2 emissions by one billion tonnes for a given year. What, however, would be the solution for the other one billion tonnes the next year? There is no silver bullet that will generate a sustainable energy supply for the next generation. Only a combination of all the options mentioned, i.e., using carbon capture, using energy more efficiently, and using more nuclear and renewable energy, would meet future energy targets [28].
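To make the plant-closure arithmetic above concrete, the back-of-the-envelope calculation below reproduces it in Python. The capacity factor and emission intensity are illustrative assumptions on our part, not figures from the text.

```python
# Rough check of the "300 coal plants per billion tonnes" estimate.
# Assumed (not from the text): 80% capacity factor, ~0.95 kg CO2/kWh for coal.
CAPACITY_MW = 500
CAPACITY_FACTOR = 0.8          # assumed typical plant utilization
EMISSION_KG_PER_KWH = 0.95     # assumed coal emission intensity

annual_kwh = CAPACITY_MW * 1000 * 8760 * CAPACITY_FACTOR
annual_tonnes_co2 = annual_kwh * EMISSION_KG_PER_KWH / 1000.0

plants_per_gigatonne = 1e9 / annual_tonnes_co2
print(f"One 500 MW plant emits ~{annual_tonnes_co2 / 1e6:.1f} Mt CO2/yr")
print(f"Plants needed for a 1 Gt/yr reduction: ~{plants_per_gigatonne:.0f}")
# -> roughly 300 plants, consistent with the estimate cited above.
```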
Figure 18. Examples of energy efficiency (compact fluorescent light bulb, green building, zero-emissions vehicle) [40], [41].
Conclusions

There is no doubt that fossil fuels will continue to account for the largest portion of world energy supply for the next couple of decades. The consumption of fossil fuels has been increasing over the last half century and this trend is expected to continue until 2030. Do we have enough fossil fuel resources in the world to supply future energy needs? Views about world fossil fuel reserves differ and it is difficult to predict exactly when supplies will be exhausted. It has been estimated that fossil fuel reserve depletion times for oil, coal and gas are approximately 35, 107 and 37 years, respectively, based on a continuous compounding rate. This means that coal reserves will be available up to 2112 and coal will be the only fossil fuel remaining after 2042. Coal is not only the most abundant fossil fuel, but also the most widely distributed source of energy. The amount of economically recoverable fossil fuel will fluctuate with its price. Fossil fuel price prediction, however, is a dilemma: there is a big difference between the predictions of fossil fuel prices over the last couple of decades and actual prices. One of the main reasons for this variance is that the price of fossil fuels depends on many unpredictable variables. World energy demand is expected to grow at roughly 1.6% annually, which amounts to an increase of 45% by 2030. There are a few available options to meet future energy needs as well as emission targets. The first option is to continue to use fossil fuels, but with CO2 Capture and Storage (CCS). Even though some promising experiments have been undertaken and government investments have been allocated, faster and more efficient research is needed in order to scale this option from millions to billions of tonnes a year. The second option is to use renewable and nuclear energy more intensively. There is no doubt that clean energy projects such as solar, wind, hydro and geothermal will be major attractions for satisfying future energy needs; however, more research and governmental subsidies are required to make this option economically viable and attractive. Although nuclear power still needs public acceptance with regard to safety and waste
disposal, it may be a necessary option for some countries. It is expected that this second option can supply 20% of energy needs by 2030. The third option is to use energy more efficiently. There are good examples of applications currently in use that save energy, such as fluorescent bulbs, hybrid and electric cars, and green buildings. This option is a key part of the solution to future energy challenges, and with further improvements and examples this area is expected to continue to grow. In conclusion, there is no silver bullet that can meet future energy needs. The solution for future energy sustainability is a combination of alternatives, such as using energy more efficiently, using carbon capture technologies, and using more nuclear and renewable energy.
References
[1] Goldemberg, J. (2006). "The promise of clean energy." Energy Policy, 34, 2185-2190.
[2] IEA. (2008). "World Energy Outlook 2008." Paris and Washington, D.C.: Organisation for Economic Co-operation and Development, International Energy Agency.
[3] BP. (2008). "BP Statistical Review of World Energy 2008." British Petroleum.
[4] IEA. (2006). "World Energy Outlook 2006." Paris and Washington, D.C.: Organisation for Economic Co-operation and Development, International Energy Agency.
[5] WCI. (2006). "Coal: Liquid Fuels." World Coal Institute.
[6] Shafiee, S. & Topal, E. (2009). "When will fossil fuel reserves be diminished?" Energy Policy, 37(1), 181-189.
[7] http://www.energystar.gov/index.cfm?c=cfls.pr_cfls.
[8] Zittel, W. & Schindler, J. (2007). "Coal: Resources and Future Production." Ottobrunn: Energy Watch Group.
[9] WEC. (2007). "Survey of Energy Resources." London: World Energy Council.
[10] WCI. (2007). "Coal: Meeting the Climate Challenge." World Coal Institute.
[11] Lior, N. (2008). "Energy resources and use: the present situation and possible paths to the future." Energy, 33, 842-857.
[12] EIA. (2007). "International Energy Outlook 2007." Washington: Energy Information Administration.
[13] EIA. (2008). "Annual Energy Outlook 2008 With Projections to 2030." Washington: Energy Information Administration.
[14] Shafiee, S. & Topal, E. (2008). "An econometrics view of worldwide fossil fuel consumption and the role of US." Energy Policy, 36(2), 775-786.
[15] IEA. (2006). "World Energy Outlook 2006." Paris and Washington, D.C.: Organisation for Economic Co-operation and Development, International Energy Agency.
[16] Bachmeier, L. J. & Griffin, J. M. (2006). "Testing for Market Integration: Crude Oil, Coal, and Natural Gas." Energy Journal, 27(2), 55-71.
[17] Simmons, M. R. (2005). Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy. Hoboken, N.J.: John Wiley & Sons.
[18] Simpkins, J. (2009). "The 'Cheap Oil Era' is Ending Soon." Money Morning: http://www.moneymorning.com/2009/01/10/cheap-oil-era/.
[19] EIA. (2009). "Annual Energy Outlook 2009 With Projections to 2030." Washington: Energy Information Administration.
[20] Slade, M. E. (1982). "Trends in natural-resource commodity prices: An analysis of the time domain." Journal of Environmental Economics and Management, 9, 122-137.
[21] Slade, M. E. (1985). "Noninformative trends in natural resource commodity prices: U-shaped price paths exonerated." Journal of Environmental Economics and Management, 12, 181-192.
[22] Slade, M. E. (1988). "Grade selection under uncertainty: Least cost last and other anomalies." Journal of Environmental Economics and Management, 15, 189-205.
[23] Mueller, M. J. & Gorin, D. R. (1985). "Informative trends in natural resource commodity prices: A comment on Slade." Journal of Environmental Economics and Management, 12, 89-95.
[24] Berck, P. & Roberts, M. (1996). "Natural resource prices: will they ever turn up?" Journal of Environmental Economics and Management, 31, 65-78; Ahrens, W. A. & Sharma, V. R. (1997). "Trends in Natural Resource Commodity Prices: Deterministic or Stochastic?" Journal of Environmental Economics and Management, 33, 59-74.
[25] Lee, J., List, J. A. & Strazicich, M. C. (2006). "Non-renewable resource prices: Deterministic or stochastic trends?" Journal of Environmental Economics and Management, 51, 354-370.
[26] Shafiee, S. & Topal, E. (2007). "Econometric Forecasting of Energy Coal Prices." In Australian Mining Technology Conference. Western Australia: CRC Mining.
[27] Shafiee, S. & Topal, E. (2008). "Introducing a New Model to Forecast Mineral Commodity Price." In 1st International Future Mining Conference & Exhibition. Sydney, Australia: The University of New South Wales.
[28] Mandil, C. (2008). "Our energy for the future." S.A.P.I.E.N.S, Volume 1, Issue 1, http://sapiens.revues.org/index70.html.
[29] http://www.ens-newswire.com/ens/apr2006/2006-04-18-02.asp.
[30] http://en.wikipedia.org/wiki/Global_warming.
[31] http://www.globalcarbonproject.org/about/index.htm.
[32] http://en.wikipedia.org/wiki/Kyoto_Protocol#cite_note-90.
[33] IPCC. (2005). IPCC Special Report on Carbon Dioxide Capture and Storage. 18th Session of IPCC Working Group III, Montreal, Canada.
[34] http://www.irishtimes.com/newspaper/ireland/2008/0918/1221599468840.html.
[35] http://www.cnn.com/NATURE/9905/10/oceans.enn/; http://science.howstuffworks.com/bury-co2-in-ocean2.htm.
[36] Wilson, J. E., Morgan, M. G., Apt, J., Bonner, M., Bunting, C., Figueiredo, M. D., Gode, J., Jaeger, C. C., Keith, D. W., McCoy, S. T., Haszeldine, R. S., Pollak, M. F., Reiner, D. M., Rubin, E. S., Torvanger, A., Ulardic, C., Vajjhala, S. P., Victor, D. G. & Wright, I. W. (2008). "Regulating the Geological Sequestration of Carbon Dioxide." Environmental Science & Technology, 42, 2718-2722.
[37] Stephens, J. C. & Keith, D. W. (2008). "Assessing Geochemical Carbon Management." Climatic Change, 90, 217-242.
[38] http://science.howstuffworks.com/bury-co2-in-ocean1.htm.
[39] Flavin, C. (2008). "Low Carbon Energy: A Roadmap." Worldwatch Report 178, Worldwatch Institute, Washington, DC, USA.
[40] http://en.wikipedia.org/wiki/Efficient_energy_use#cite_note-app-4.
[41] http://www.lbl.gov; http://www.mitsubishi-motors.com/special/ev.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 225-240
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 7
SCENARIO DISCOVERY AND TEMPORAL ANALYSIS FOR ENERGY CONSUMPTION FORECASTING OF THE BRAZILIAN AMAZON POWER SUPPLIERS
A. Cláudio Rocha 1, L. Ádamo de Santana 2, A.B. Guilherme Conde 2, R. Carlos Francês 2 and L. Nandamudi Vijaykumar 3
1 University of the Amazon, Av. Alcindo Cacela, 287, 66060-902, Belém, PA, Brazil
2 Laboratory of High Performance Networks Planning, Federal University of Pará, R. Augusto Côrrea, 01, 66075-110, Belém, PA, Brazil
3 Laboratory of Computing and Applied Mathematics, National Institute for Space Research, Av. dos Astronautas 1758, Jd. Granja, 12227-010, São José dos Campos, SP, Brazil
Abstract

Usually, power distributors estimate energy consumption based on the historical values of the consumption alone. This, however, tends to compromise the accuracy of predicted values, particularly in areas like the Amazon region that are very susceptible to climatic and economic variations. With this in mind, a useful tool for the power suppliers could be made available to establish metrics for measuring the impact that other random variables (e.g. economic and climatic) have on the variation of energy consumption, so that it would be possible to predict scenarios and the progression of their behavior, in order to achieve a more economic, safe and reliable setting for the supplier. This paper presents a new model, based on mathematical and computational intelligence techniques, to meet these needs; in particular, we considered the peculiarities of regions like the Amazon. The contributions of this work are threefold: first, the establishment of correlations among economic, climate and energy consumption data, using Bayesian networks (BN); second, a model to explore the discovery of scenarios, implementing a hybrid algorithm that combines genetic algorithms and Bayesian inference, thus allowing decision-makers to estimate which economic conditions favor the occurrence of a given target of energy consumption; third, the forecasting of consumption through a new model for temporal analysis of the envisioned scenarios and inferences, which applies probabilistic Bayesian models with Markovian-driven temporal analysis. From the models developed, it was possible to create a complete decision support environment for managers of the power suppliers, providing means to establish more advantageous energy contracts in the future market and to analyze favorable scenarios based on climatic variations and the social and economic conditions of a given region.
1. Introduction

It is a fact that the electric sector plays a major role in the development of any region or country. In particular, for countries with huge forests such as Brazil, the Amazon region, due to its economic situation, depends on a stable supply of quality energy. This must be considered a sine qua non condition to promote sustainable development as well as the social inclusion of its population. Companies responsible for generating and distributing energy face increasing challenges to innovate processes, equipment and methods in order to obtain economic advantages, reduce the climate changes associated with the greenhouse effect, promote sustainable development and educate society to avoid wasting electrical power. Therefore, the investigation of methods, techniques and tools that may support decision processes in the electrical sector gains importance, and it has become an important theme of research in both national and international scenarios. Naturally, this support to decision making may be developed by employing several methods; however, methods employing intelligent systems are among those that present the most robust results. Such methods may fall into the category of Data Mining (DM), also known as Knowledge Discovery in Databases (KDD). This category consists of mature technologies, widely incorporated into organizational processes in several corporations. DM may be understood as a non-trivial, interactive and iterative process to identify valid, new, potentially useful and comprehensible patterns from huge data sets [1]. Several techniques using DM can be found in the published literature. Among them, one of the most prominent for interpreting knowledge from domains with uncertainty is Bayesian networks (BN). They provide a mechanism to represent the causal model of a given set of data [2], allowing both qualitative and quantitative analyses among the variables of a domain and thus supporting decision making through the following inference types: diagnostic, causal, intercausal and combinations of these three [3]. Despite their advantages, Bayesian networks have their drawbacks. It is not possible, for example, to correlate the time factor. Moreover, it is not possible to establish an optimal set of states (a scenario) for a given set of variables in the domain of interest in order to obtain a given objective (a desirable state for a certain variable within the domain of interest). So, it is essential to deal with the following issues: scenario discovery, and establishing a correlation between socio-economic and climate-related variables and energy consumption, considering the time factor. Thus, an original method has been developed to deal with these issues. Scenario discovery is dealt with by employing the combination of
Genetic Algorithms and Bayesian networks obtained from the datasets. With respect to correlation, a time series analysis based on information from a Bayesian network mapped into a Markov chain has been implemented. As a case study to validate the proposed method, information from two electrical power companies from the States of Pará and Tocantins (both in the Amazon region) is used. Socio-economic data were provided by the two States, while the climate information was provided by the Brazilian National Institute for Space Research (INPE). The discovery of such scenarios is of valuable importance to the power sector, as they provide correlation analysis between the socio-economic and climate factors that influence energy consumption. This will enable planning and operating this sector in a safe and reliable manner. The possibility of correlating these factors with relation to time is an interesting and essential point, as it will enable decision makers to anticipate the future behavior of the domain variables. Moreover, these analyses should aid decision making in the electrical power sector related to power marketing: they enable estimating and quantifying the consumption of electrical power, besides elaborating plans to negotiate the purchase/sale of electricity. Such plans are essential, as these companies are subject to fines if they underestimate or overestimate power consumption. The remainder of this work is organized as follows: section 2 presents the context for applying the methods developed, that is, to establish scenarios of energy consumption considering the climatic and socio-economic characteristics of the Amazon region. Section 3 presents the method for the discovery of scenarios, based on the optimization strategy, and the time correlation of socio-economic and climatic variables with the energy consumption. The final considerations of this work are presented in section 4.
2. Forecast of Electric Power Consumption in the Brazilian Amazon

The Amazon region has a very particular profile with respect to electricity consumption. This is due to its huge territory consisting of dense forest, several parts of which still have no access (or poor access) to electricity. Added to this, climate conditions with heavy rains in certain periods of the year, and socio-economic conditions concentrated almost exclusively in the extractive sector, exert a major influence on the initiatives conducted by the power companies and on public policies. It is also essential to measure the impact that other random variables (temperature, humidity, socio-economic factors, among others) exert on the consumption, so that it is possible to forecast scenarios for an economical, safe and high-quality operation of the power system. A decision support system, Predict, has been proposed and developed to forecast the use of electrical power as well as to correlate variables that are not part of the electrical system, such as climate and socio-economic conditions. The main concern is to design and implement a computational system consisting of mathematical and computational intelligence methods to predict the necessities of purchasing electricity from the future market, besides performing inferences from historical data of consumption and their correlations with climate and socio-economic data.
Section 2.2 shows the basic architecture of Predict, with its modules and data sources from the States of Pará and Tocantins, used to discover scenarios and perform time series analysis of electricity consumption. The proposed method's main utility is to produce forecasts that enable better commercialization conditions for electricity. Correct forecasting is fundamental to the success of commercial transactions and, in order to illustrate this aspect, the scenario of electricity commercialization in Brazil is briefly discussed in the following section.
2.1. Process of Commercialization of Electricity in Brazil

The commercialization of electricity in Brazil follows the parameters of Law 10848/2004, of Decrees nº 5.163/2004 and nº 5.177/2004 (which established the Commercialization Chamber of Electrical Energy, CCEE) and of Resolution nº 109/2004 from the National Agency of Electrical Power (ANEEL). The relations among CCEE agents that generate, distribute and commercialize electricity are governed by contracts of purchase/sale of electricity, and all contracts must be duly registered with the CCEE. The CCEE calculates the differences between what was produced and consumed and what was contracted. The positive and negative differences are settled in the Short Term Market and their values follow the Prices for Liquidation of Differences (PLD). So, it can be said that the short term market is the market of the differences between contracted figures and measured figures. By centralizing electricity purchases through the CCEE, the risks faced by distributors are reduced, as they are no longer responsible for generation planning and can consequently aim at buying only the necessary demand. On the other hand, there is a considerable risk in estimating a wrong figure for the demand, as the distributors then face penalties [4]. Penalties are based on a stochastic component with a high degree of volatility, the PLD. The distributors have no influence on the PLD, as it depends on several factors such as reservoir levels, hydrothermal system expansion, etc. As the PLD varies, there is considerable risk of being penalized due to wrong estimations. Wrong forecasts in purchasing electricity may negatively affect the distributing companies' operations: if they underestimate, they are fined and, moreover, must buy electricity on short notice (to satisfy the demand), which raises the costs; if they overestimate, they also face fines applied by ANEEL. Besides, it is not always possible to pass on the costs due to wrong estimations. ANEEL establishes that power distributors must anticipate, by means of public auctions, the necessary purchase, and the distributors are subject to some guidelines when charging the customers. If the purchased energy amounts to less than 100% of the realized demand, the distributors are subject to penalties. If it lies between 100% and 103%, they are allowed to pass the total volume of purchased energy on to the customers. In case the purchased energy is more than 103%, the companies have to be prepared to assume the risk of not passing on the difference (between the purchased figure and the sales figure in the short term market) to the customers.

This section showed the uncertainties in the electricity market; these uncertainties indicate that it is essential to use scientific methods to identify consumption
scenarios based on variables within and external to the electrical system. These scenarios are very useful for contracting the purchase of energy favorably, besides enabling the governing bodies to establish public policies and investments to develop certain areas, based on the resulting correlations between consumption and socio-economic factors. The developed methods have been incorporated into Predict; a sketch of the contracting-band logic is given below, and the next section shows the system's architecture.
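As an illustration of the 100%/103% pass-through rules summarized above, the sketch below classifies a distributor's contracted-versus-realized position. It is a minimal, hypothetical model: the function name and the returned labels are ours, the band boundaries are taken from the text, and real CCEE/ANEEL settlement involves many factors (PLD volatility among them) that are ignored here.

```python
def contracting_position(purchased_mwh: float, realized_mwh: float) -> str:
    """Classify a distributor's position under the 100%-103% band rules.

    Hypothetical simplification of the rules described in the text:
    below 100% of realized demand -> penalties; 100%-103% -> full
    pass-through to customers; above 103% -> the distributor bears the
    risk of the surplus settled in the short term market.
    """
    ratio = purchased_mwh / realized_mwh
    if ratio < 1.00:
        return "under-contracted: subject to penalties"
    elif ratio <= 1.03:
        return "within band: full cost pass-through allowed"
    else:
        return "over-contracted: surplus risk borne by the distributor"

# Example: purchased 1,040 GWh against a realized demand of 1,000 GWh.
print(contracting_position(1_040_000, 1_000_000))  # -> over-contracted
```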
2.2. Architecture of the Decision Support System Predict

Figure 1 shows a version of the Predict architecture that is useful as a support to decision making. The elements of this architecture are divided into the following modules (sub-systems):

• Data Management: contains the data acquisition system and the data pre-processing used in Data Mining.
• Knowledge Management: consists of DM algorithms to detect patterns.
• Interface Management: visualization tools for the information and/or knowledge extracted from the data, to assist decision-making personnel.
Figure 1. Basic Architecture of Predict.
Data Management considers three sources of data, identified in Figure 1:

• Power Consumption: data coming from the corporate databases of the companies that serve two States of the Amazon region. Historic information of CELPA (State of Pará Electricity Company) and CELTINS (State of Tocantins Electricity Company), with 11 attributes (month/year measured, residential, commercial, industrial, rural and government consumption, public illumination, public services, consumption of the company, total consumption and required electricity), measured on a monthly basis. For the studies conducted for this chapter, the periods of January 1991 to August 2009 (CELTINS) and January 1989 to August 2009 (CELPA) were considered.
• Socio-Economic: government data vary according to those made available by the States (Pará and Tocantins). In the case of the State of Pará, five attributes, measured monthly, were used (total revenue, average value of the dollar, number of employees hired by transformation industries and number of employees hired by agriculture-related industries). For the State of Tocantins, twelve attributes were used (exports of grains, seeds, oil-generating fruits, vegetable oil, sugar, minerals, leather, skins, wood, iron and steel; service taxes; transfers to the Central Government; GNP (R$) and GNP (US$)). The periods chosen for the study are January 2000 to August 2009 (CELTINS) and January 1999 to August 2009 (CELPA).
• Climate: climate data are related to monthly measures of minimum and maximum temperatures, rainfall index and relative humidity, totaling 4 attributes. The analyses conducted have considered the data in the context of each State; however, Bayesian networks generated by Predict also analyzed the correlation of climate factors with the consumption in some towns where climate data were collected. Both States contributed data for the period of January 2000 to August 2009.
It is important to stress that the use of climate and socio-economic variables was based on suggestions from domain specialists (market analysts and company engineers). All the variables (energy consumption, climate and socio-economic) were divided into 10 states (ranges of values). For example, residential consumption was divided into 10 ranges of values, with range 1 being the lowest and range 10 the highest. The number of states can, however, differ from one variable to another. In the Knowledge module, two streams of analysis are provided:

• Forecast: forecasts the consumption of energy that will be billed (and its several classes) and the medium-term (1 to 2 years) and long-term (more than 2 years) required energy. In order to achieve this, mathematical regression methods, artificial neural networks and hybrid neuro-genetic methods were used.
• Correlation: responsible for correlating socio-economic, climate and electricity data. The climate and socio-economic influence over the consumption of electricity is assessed by Bayesian networks and by hybrid methods that combine Bayesian networks with Markov chains and Genetic Algorithms.
Finally, the Interface module displays the results in a user-friendly fashion. Figure 2 shows the interface of the scenario generation module. The example illustrates the ideal scenario to maximize the commercial consumption, using climate data and billed consumption, separated by classes, of the CELTINS company. Now that the Predict system has been briefly explained, Section 3 shows how the scenario discovery method was employed, together with the analysis, in time, of the impact of socio-economic factors, based on the correlations among the data within the Predict databases. All of the results obtained from Bayesian networks are based on the K2 search-and-score algorithm [5].
Figure 2. Graphical interface of Predict.
3. Discovery of Socio-Economic and Climate Scenarios for Optimization of Energy Consumption

The discovery of scenarios that are conducive to achieving a particular goal is of utmost importance to support the process of decision making; for example, determining which socio-economic scenario corroborates obtaining a user-defined target value of total energy consumption. The method developed is aimed at providing decision-making users with means to analyze, in advance, the scenarios that can lead to achieving a certain goal.
3.1. Hybrid Method for Scenario Discovery

The method implemented here aims to identify the best configuration, among the possible values of the variables in the domain, corroborating the achievement of a target value for one (or more) variable(s) in the domain in question. For this, we used a hybrid method that combines the probabilistic and correlational power of BNs with the ease with which GAs incorporate problem-specific knowledge, in order to carry out optimization tasks. The interaction between these two computational intelligence techniques (GA and BN) occurs as follows. As can be seen in Figure 3, the process of scenario discovery starts by supplying the BN, generated from the data, and its parameters; then, a GA is applied, using the actual inference engine of the BN as the fitness function for the individuals (scenarios); at the end of its iterations, the optimal scenario to achieve a particular goal is obtained.
In Figure 3, P(X|E) represents the probability of obtaining a particular state of X (the target variable), given the set of remaining variables in the domain, E. Thus, the scenarios (configurations of states for the variables E) represent the individuals of the GA, which are evaluated (fitness function) by the probability of obtaining the goal X. That is, each scenario is provided as input to the BN inference method, which returns as output the probability P(X|E) for this query. As mentioned previously, this value is used as the fitness function for the individuals (scenarios) of the genetic algorithm (GA). Thus, the GA starts with the random generation of an initial population I, consisting of a set of candidate scenarios, which are then evaluated by the inference method of the BN, in order to obtain the fitness of all scenarios, measured by the probability of obtaining the target value of the queried variable X given a particular configuration of states (scenario) of the evidence variables E. The process continues with the selection of individuals through the roulette method. Next, we apply the crossover operator, with crossover rate Tc, and the mutation operator, with mutation rate Tm. The process is repeated for n generations. For the discovery of energy consumption scenarios, we will take as an example the search for the optimal socio-economic scenario that maximizes the residential energy consumption of the power supplier. The BN generated for this analysis considers 18 attributes (socio-economic and energy consumption per class).
Figure 3. Representation of the method for discovery of scenarios (the GA operators generate candidate scenarios, which are evaluated through the fitness function f(P(X|E)) by the inference engine of the BN, built from the learned structure and parameters, until the best scenario is discovered).
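The loop just described (BN inference as fitness, roulette selection, one-point crossover, mutation, elitism) can be sketched compactly in Python. This is an illustration under stated assumptions, not the authors' code: bn_probability stands in for a real BN inference engine, and the population size, generation count and rates are placeholders.

```python
import random

N_VARS, N_STATES = 6, 10   # evidence variables and states per variable;
                           # the chapter's example uses 17 evidence variables

def bn_probability(scenario):
    """Stand-in for the BN inference engine returning P(target | scenario).
    A real implementation would query the learned network here."""
    return sum(scenario) / (N_STATES * N_VARS)  # toy, smooth fitness

def roulette(pop, fits):
    pick, acc = random.uniform(0, sum(fits)), 0.0
    for ind, fit in zip(pop, fits):
        acc += fit
        if acc >= pick:
            return ind
    return pop[-1]

def crossover(a, b):                       # one-point crossover
    cut = random.randint(1, N_VARS - 1)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.001):               # per-gene mutation
    return [random.randrange(N_STATES) if random.random() < rate else g
            for g in ind]

pop = [[random.randrange(N_STATES) for _ in range(N_VARS)] for _ in range(50)]
for _ in range(200):                       # generations
    fits = [bn_probability(ind) for ind in pop]
    elite = max(pop, key=bn_probability)   # elitism keeps the best scenario
    pop = [elite] + [mutate(crossover(roulette(pop, fits),
                                      roulette(pop, fits)))
                     for _ in range(len(pop) - 1)]

print("best scenario found:", max(pop, key=bn_probability))
```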
Table 1. Most likely scenario for maximizing the residential consumption

Order   Variable                    State
A01     RURAL_MWh                   7
A02     RESIDENTIAL_MWh             M
A03     PUBLIC_ILLUMINATION_MWh     7
A04     ExpSugar_ton                1
A05     ExpSeeds_and_Fruits_ton     5
A06     ExpWood_ton                 8
A07     INDUSTRIAL_MWh              7
...     ...                         ...
A18     COMMERCIAL_MWh              9
Applying the method developed, the scenario in Table 1 was found, with a maximum probability of 0.833617 of occurrence of the target (maximum residential power consumption, i.e., RESIDENTIAL_MWh = 10). Note that the target variable is set with its target value, in this case RESIDENTIAL_MWh = 10; in Table 1, this state is represented by M. In this example, RESIDENTIAL_MWh = 10 is considered the target, but it would be possible to choose any state of this, or of other variables from the BN. The GA acts on the inference method of the BN to find the configuration that fosters the achievement of RESIDENTIAL_MWh = 10 with maximum likelihood. Thus, the individual found by the GA that represents the (optimal) scenario to obtain the maximum value of residential consumption (RESIDENTIAL_MWh = 10) was the one presented in Table 1, in which the first position defines state 7 for the variable RURAL_MWh; the second, state 7 for the variable PUBLIC_ILLUMINATION_MWh; the third, state 1 for ExpSugar_ton; and so on (the target variable is not part of the individual), forming a favorable scenario for obtaining the maximum value of residential consumption. Another analysis applying the method developed is shown next. Now we seek to find the best scenario considering not only the ranges of the discretized states, but the actual values within these ranges, for all the variables involved. To demonstrate this functionality, for simplicity, and based on the knowledge of domain experts, the number of variables was reduced according to their impact on the variation of power consumption. They are: the number of employments in the transformation industries and in agriculture and cattle breeding, and the values of the total turnover and of the dollar. We point out that their influence reflects directly not only on the total power consumption, but also on the many classes of consumption (residential, industrial, commercial, etc.). Given the knowledge that the variables number of employments in the transformation industries (emp_ind), employments in agriculture and cattle breeding (emp_agro), value of the total turnover (val_turn) and value of the dollar (val_dol) are the main influences on the variation of the power consumption, they were used in the next step, which consisted in the creation of a BN (Figure 4). In the BN, all the attributes were discretized into ten states, according to the frequency of their values, allowing us to verify the probability associated with each one of them, as well as the conditional probabilities existing among the variables.
Figure 4. BN created from the variables.
Once the network is set, the next step is, by making use of the data given by the BN, to search the network attributes for the states that would maximize the power consumption.
As mentioned previously, each of the individuals of the genetic algorithm represents an inference configuration of the BN, generated randomly (e.g. evidencing the variables emp_ind with state 2, emp_agro with state 1, val_turn with 7 and val_dol with 4 generates the individual 2-1-7-4). Each individual is then, for its classification, submitted to the Bayesian inference module in order to verify the probability with which the power consumption attribute would be maximized, obtaining, at the end of the iterations, the best possible configuration of inferences on the BN for the maximization of the power consumption. However, at the end of this step (after the genetic algorithm analysis) we would have only the respective states (i.e. bands of values) for this maximization, instead of a single value for each attribute, which is what we seek. Following this phase, we make use, again, of a genetic algorithm, but this time a traditional genetic algorithm whose fitness function we obtain from the data. The function used for the genetic algorithm is obtained from a multiple-variable regression [6,7] made over the attributes of the BN. The multivariate analysis is made over the consumption data, but considering only the data instances located within the ranges found in the previous step. Thus, we obtain an equation (presented below) with a good representativity (approximately 0.9039) over the domain:

Y = 258,598,510.5 + 3,675.6834 X1 + 4,430.9036 X2 + 0.4701 X3 - 12,182,208.61 X4        (1)

where Y represents the power consumption and X1, X2, X3 and X4 represent the values of the attributes emp_ind, emp_agro, val_turn and val_dol, respectively.
Table 2. Values of the attributes for the maximization of the consumption

Attribute    Value
emp_ind      5,380
emp_agro     3,357
val_turn     R$ 100,752,576.00
val_dol      R$ 2.861

Table 3. Parameters used in the algorithms

Parameter               Value
Initial population      50 individuals
Number of generations   1,000
Selection               Roulette
Crossover               One point
Crossover rate          98%
Mutation rate           0.1%
Elitism                 Yes
Based on Equation (1), the GA is then used, thus obtaining, for each of the attributes, the values that would maximize the power consumption. It is worth mentioning again that the individuals evaluated by the fitness function are only those within the ranges of values that maximize the value of consumption. Thus, in order to achieve the occurrence of the maximum consumption, it is necessary that the values in Table 2 are attained for the attributes emp_ind, emp_agro, val_turn and val_dol.
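This second optimization stage can be pictured as follows: a plain, binary-encoded GA maximizes the regression of Equation (1) while each gene is confined to the value band selected in the first stage. The band bounds used below are placeholders, not the actual ranges found by the authors, and the encoding details are our own simplification.

```python
import random

# Equation (1): the fitness to maximize.
def consumption(x1, x2, x3, x4):
    return (258_598_510.5 + 3_675.6834 * x1 + 4_430.9036 * x2
            + 0.4701 * x3 - 12_182_208.61 * x4)

# Placeholder (min, max) bands for emp_ind, emp_agro, val_turn, val_dol,
# standing in for the ranges discovered by the first (BN-guided) GA stage.
BANDS = [(5_000, 6_000), (3_000, 3_500), (9e7, 1.1e8), (2.5, 3.0)]

def decode(bits, lo, hi, n=16):
    """Map an n-bit gene to a real value inside [lo, hi]."""
    return lo + int(bits, 2) * (hi - lo) / (2**n - 1)

def random_ind(n=16):
    return ''.join(random.choice('01') for _ in range(n * len(BANDS)))

def fitness(ind, n=16):
    genes = [ind[i * n:(i + 1) * n] for i in range(len(BANDS))]
    return consumption(*(decode(g, lo, hi, n)
                         for g, (lo, hi) in zip(genes, BANDS)))

pop = [random_ind() for _ in range(50)]
for _ in range(1000):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                # elitist selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))             # one-point crossover
        child = a[:cut] + b[cut:]
        child = ''.join(c if random.random() > 0.001  # 0.1% mutation
                        else random.choice('01') for c in child)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("maximizing values:",
      [round(decode(best[i * 16:(i + 1) * 16], lo, hi), 3)
       for i, (lo, hi) in enumerate(BANDS)])
```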
The GAs used were basically parameterized according to the values in Table 3. The representation used for the individuals, however, was different: the first GA used a representation whose size was based on the number of possible states that the variables of the BN could assume, while the second one used a binary representation. Other tests specifying different values for the parameters were also made; the results obtained, however, did not present any significant alteration. It is worth mentioning that the optimization model used is not restricted to the discovery of the maximum values of consumption: it can also be used to identify scenarios that cause a minimum, average or any other value to be achieved by the power supplier, given the variation of the considered economic aspects. The next section discusses the other module developed, which establishes the correlation of climate and socio-economic variables, considering the time factor.
3.2. Correlation and Time Analysis of Socio-Economic and Climate Variables with the Energy Consumption

In this work, the time analysis is implemented by mapping the data model and characteristics of a Bayesian network into a Markov chain. The idea is to establish an isomorphism between a Bayesian network in time and a discrete-time Markov chain. The model used seeks to analyze the forecast using the concepts of Hidden Markov Models (HMM), with respect to their theoretical foundations and assumptions regarding non-regular Markov models governed by probability distributions. The time domain is modeled in a simplified way, with the Markovian time transition following a 1st-order process, but also intrinsically considering, in its transitions, the other variables of the domain that might influence the behavior of the attribute. That is, just like a Markov chain, a Bayesian network can be seen as a matrix of attributes that are correlated and that influence each other throughout time [8]. In the context of correlation analysis of energy consumption, we apply this method to study the correlations between energy consumption and climate factors, within a monthly time scale. From the consumption and climatic data, it was possible to obtain the Bayesian network shown in Figure 5. The attributes refer to the types of energy consumption (residential, commercial, industrial and public) and the observed climatic factors (temperature, relative humidity and rainfall). The analysis considered as an example of the implemented model seeks to study the change in the probabilities of the variable commercial consumption, given an inference of an increase in rainfall, assuming a constant increase over a period of six months. For the proposed scenario, the implementation of a temporal connotation thus becomes essential, and its use imperative, when conducting a probabilistic analysis of the behavior of the attributes over the stipulated period, allowing us to study and discuss the trends in the model at each instant of time.
Figure 5. BN correlating the energy consumption with the climatic factors (nodes: Months, Rainfall, MaxTemp, MinTemp, RelHumid, Industrial, Residential, P_Illum, Public, Commerc).
Table 4. Marginal probabilities of the variables Rainfall and Commercial

Rainfall (mm)          P        Commercial (MW)          P
[1.497 → 32.408)       0.192    [126,918 → 148,047)      0.192
[32.408 → 43.422)      0.192    [148,047 → 160,840)      0.192
[43.422 → 88.154)      0.192    [160,840 → 174,684)      0.192
[88.154 → 161.583)     0.192    [174,684 → 195,908)      0.192
[161.583 → 315.292]    0.230    [195,908 → 219,649]      0.230
The variable rainfall, used to infer the pattern of the BN, is continuous by nature; its values were, however, discretized into five ranges of values, from a minimum of 1.497 up to 315.292 mm. The variable commercial, which represents the energy consumption in MW in the commercial sector, has, in turn, its values also discretized into five states, ranging from 126,918 to 219,649. The discretized states and their probabilities are shown in Table 4. The probabilities that serve as the basis for the Markov transition matrix are calculated as follows:

$$p_{xy} = \frac{\sum_{i=1}^{n} P(s_y \mid s_x \cap Pa_i)\,P(Pa_i)}{\sum_{j=1}^{m}\sum_{k=1}^{n} P(s_j \mid s_x \cap Pa_k)\,P(Pa_k)} \qquad (2)$$

where: p_xy corresponds to the probability of transitioning from state x to state y; s is the observed variable and its respective states; Pa is the variable that represents the attributes on which the variable s depends; n is the number of possible states and/or combinations that the parents of the attribute can assume; and m is the number of states the attribute can assume.
Calculating from (2), we obtained the Markov transition matrix (represented by the letter P), presenting the transition probabilities for the states of the variable studied, as shown in Table 5. The discretized states, presented in Table 4, are, for simplification, represented by the labels C1 to C5, according to the set of values each state represents. The computed matrix presents the transition probabilities between the states of the considered variable. Furthermore, to find the transition probabilities at a given time step n, we need only calculate the nth power of the probability matrix, as described by the Chapman-Kolmogorov equations [9]:

$$P^{(n)} = P^{(m)} \times P^{(n-m)} \qquad (3)$$

where P^(n) is the transition matrix at step n; thus P^(n) = P^n.
Table 5. Markovian transition matrix of the variable commercial consumption

            C1      C2      C3      C4      C5
P =   C1    0.371   0.371   0.086   0.086   0.086
      C2    0.319   0.191   0.391   0.049   0.049
      C3    0.049   0.238   0.427   0.143   0.143
      C4    0.078   0.078   0.205   0.360   0.278
      C5    0.116   0.116   0.116   0.301   0.351
Table 6. Markovian transition matrix after the transition of six units of time

             C1      C2      C3      C4      C5
P^6 =  C1    0.181   0.205   0.266   0.176   0.171
       C2    0.179   0.203   0.265   0.177   0.172
       C3    0.177   0.201   0.264   0.181   0.175
       C4    0.175   0.199   0.262   0.184   0.177
       C5    0.176   0.199   0.262   0.183   0.177
This way, considering that the time unit is discretized in months, to obtain the probability values for the commercial consumption six months ahead, we need only calculate the power P^6 of the matrix (Table 6). Finally, in order to go back from the Markovian transition matrix to the marginal probabilities of the variable, we apply (4):

$$P(s_x^t) = \sum_{i=1}^{n} p_{ix} \times P(s_i^{t-1}) \qquad (4)$$
By applying equation (4), we return to the marginal probabilities for the considered analysis, identifying the following distribution for the states of the variable Commercial: C1 = 0.1776; C2 = 0.2014; C3 = 0.2638; C4 = 0.1802; and C5 = 0.1744. This results in an adjustment of the probabilities of the events and further evidence of consumption in the intermediate state, which ranges from 160,840 to 174,684 MW. Thus, with the advent of this time modeling, it was possible to expand the scope of the features of Bayesian networks, providing management users of the electricity sector with extended results on the temporal behavior and an expanded range of possible tests to apply, which were not initially possible because the original formalism of BNs is not sufficiently adequate to meet all these demands and provide a temporal depth to the domain.
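The whole temporal step can be reproduced numerically. The sketch below, using NumPy, raises the Table 5 matrix to the sixth power and then applies equation (4) to an initial marginal distribution; the initial vector used here is the Table 4 marginal of the commercial variable, which is our assumption about what was fed into the update.

```python
import numpy as np

# Transition matrix of the commercial-consumption states (Table 5).
P = np.array([
    [0.371, 0.371, 0.086, 0.086, 0.086],
    [0.319, 0.191, 0.391, 0.049, 0.049],
    [0.049, 0.238, 0.427, 0.143, 0.143],
    [0.078, 0.078, 0.205, 0.360, 0.278],
    [0.116, 0.116, 0.116, 0.301, 0.351],
])

P6 = np.linalg.matrix_power(P, 6)       # Chapman-Kolmogorov: P^(6) = P^6
print(np.round(P6, 3))                  # ~ the values in Table 6

# Equation (4): propagate the marginal distribution six months ahead.
# Assumed initial marginals (Table 4, commercial variable).
pi0 = np.array([0.192, 0.192, 0.192, 0.192, 0.230])
pi6 = pi0 @ P6
print(np.round(pi6, 4))                 # ~ [0.1776 0.2014 0.2638 0.1802 0.1744]
```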
4. Final Remarks

One of the capabilities sought by power suppliers is the ability to plan the purchase/sale of electricity at a future time, given the great variation to which this market is exposed. Usually, distribution companies forecast the energy consumption based only on historical values of the consumption. However, the accuracy of the predicted values is then compromised, especially in regions that are expanding their networks and are very susceptible to climatic and/or economic changes, such as the Amazon region. Thus, a useful tool for power suppliers should establish metrics for the impact of other random variables on the power consumption, such that it is possible to predict scenarios in which the settings for operating the electric system are economical, safe, reliable and of good quality. In this horizon, the electricity sector presents frequent demands regarding the need to establish scenarios that quantify the impact that certain economic and climatic variables have on the energy consumption; and, furthermore, to discuss, in the scenarios discovered, the influence of these variables and to discover the values (states) of the economic and climatic variables that can induce the achievement of a given goal in the power consumption. In order to meet these demands and, especially, to establish a mechanism to generalize the process of analytic interpretation of the correlations among the variables, a new method was created, based on a combination of two techniques widely used in Data Mining: Bayesian Networks and Genetic Algorithms. For this task, the Bayesian networks are generated from economic, climate and energy consumption historical data. Once the correlations between the variables that directly influence the energy consumption have been discovered from the BN, scenarios can be established, allowing decision-making users to predict which economic conditions (states of the economic variables) favor the achievement of a target consumption value. In this context, the focus of this work is to establish a set of strategies to support energy purchasing decisions in the future market, using as the basis for analysis the correlations between consumption data and climatic and socio-economic variables, particularly focused on the discovery of the scenarios that best explain the achievement of a given goal, and on measures of correlation between these data considering the time factor. Thus, the following can be considered the main contributions of this work:
• Projection and analysis of temporal correlations of the power consumption, socio-economic and climatic variables. The method provides benefits arising from feasibility studies and surveys that quantify the impact of inferences on the network over time;
• Establishment of mechanisms to quantify the influence that certain economic and climatic variables have on the consumption of electrical energy;
• Creation of a strategy for finding an optimal scenario based on climatic and socio-economic variables, in order to achieve a given target value of power consumption. We highlight that the strategy is not restricted to establishing economic and climate scenarios conducive to a target consumption, but also allows the analysis of correlations within the consumption data (e.g. between different categories of consumption);
• Provision of an effective analytical tool for supporting decisions related to the marketing of energy, which forecasts energy consumption and studies scenarios to support these predictions; these are essential aspects for success in the process of purchasing and selling energy;
• From the developed method, it was possible to create a complete decision support environment for power suppliers, so that decision-making users can make more advantageous contracts in the future market and analyze favorable consumption scenarios, based on climatic and economic variations, for their operations in the sector. Additionally, its usability may also be related to governmental actions in order to, for example, discover scenarios involving energy, climate and economic data that can point to an increase in the generation of employment and income.
Finally, though the studies developed here have been applied to data from suppliers that operate in the Amazonian region of Brazil, the features of the methods developed are of great value not only for this particular case; they can also be generalized to other actions and solutions for problems in the power systems domain.
References
[1] Fayyad, U; Piatetsky-Shapiro, G; Smyth, P. The KDD process for extracting useful knowledge from volumes of data. Communications of the ACM, 39(11), 27-34, 1996.
[2] Pearl, J. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers, 1988.
[3] Russell, S; Norvig, P. Artificial Intelligence: A Modern Approach. Prentice Hall, 2003.
[4] Barros, M; Mello, M; Souza, R. Acquisition of energy in the Brazilian captive market: simulations of the effects of regulation on the risk of distributors. Operational Research, 29(2), 303-322, 2009.
[5] Cooper, G; Herskovitz, E. A Bayesian Method for the Induction of Probabilistic Networks from Data. Machine Learning, 9, 309-347, 1992.
[6] Dillon, WR; Goldstein, M. Multivariate Analysis: Methods and Applications. John Wiley & Sons, 1984.
[7] Hair, JF; Anderson, RE; Tatham, RL; Black, WC. Multivariate Data Analysis. Prentice-Hall, 1998.
[8] Santana, AL; Rocha, CAJ; Francês, CRL; Rêgo, LP; Vijaykumar, NL; Carvalho, SV de; Costa, JCW. Strategies for improving the modeling and interpretability of Bayesian networks. Data & Knowledge Engineering, 63, 91-107, 2007.
[9] Bolch, G; Greiner, S; de Meer, H; Trivedi, KS. Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications. John Wiley & Sons, Inc., New York, USA, 1998.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 241-257
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 8
EFFICIENT LOW POWER SCHEDULING FOR HETEROGENEOUS DUAL-CORE EMBEDDED REAL-TIME SYSTEMS

Pochun Lin and Kuochen Wang
Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
Abstract

In recent years, heterogeneous dual-core embedded real-time systems, such as personal digital assistants (PDAs) and smart phones, have become more and more popular. In order to achieve real-time performance and low energy consumption, low power scheduling for such systems becomes a critical issue. Most research on low power scheduling with dynamic voltage scaling (DVS) has targeted single-CPU or homogeneous multi-core systems. In this chapter, we propose a low power scheduling algorithm called Longer Common Execution Time (LCET) for DVS-enabled heterogeneous dual-core embedded real-time systems, which includes two steps. First, we reduce the total execution time of tasks by using LCET in heterogeneous dual-core embedded real-time systems. Second, we exploit the reduced total execution time to adjust voltage and frequency levels to further reduce the total energy consumption. Simulation results show that the proposed P-LCET (a preemptive version) and NP-LCET (a non-preemptive version) can effectively reduce the total energy consumption by 8% and 16% ~ 25% (13% and 33% ~ 38%) compared with the work by Kim et al. with (without) dynamic voltage scaling.
Keywords: Dynamic voltage scaling, embedded real-time system, heterogeneous dual-core, low power scheduling, total execution time.
1. Introduction

With more and more multimedia applications, low energy consumption is extremely important for heterogeneous dual-core embedded real-time systems, like PDAs and smartphones. Most mobile handhelds are dual-core systems [6]. A dual-core system is mainly
composed of an ARM processor and a DSP. Figure 1 shows a heterogeneous dual-core architecture. Like the OMAP processor, which was manufactured by TI (Texas Instruments) for mobile applications, includes two cores, ARM926 (ARM9 core) and TMS320C55X (DSP coprocessor) [12]. The Freescale i.300-30 processor also includes two cores: ARM11 and StarCore SC140 (DSP processor) [13]. In order to conserve energy for battery-powered real-time systems, several low power techniques were proposed. Dynamic voltage scaling (DVS) and dynamic power management (DPM) have been employed as available techniques to reduce the energy consumption of CMOS microprocessor systems [1]. The DVS is a design technique to adjust the CPU’s supply voltage and frequency. The primary design goal is to exploit the slack time. Since in battery-powered systems, the battery lifetime impacts the utility and duration of the system directly, reducing the energy consumption and extending the battery lifetime should be a primary design metric [16]. We know that the energy consumption E of a CMOS circuit is dominated by its supply voltage and is proportional to the square of its supply voltage, which is expressed as
E = C eff ⋅ Vdd2 ⋅ C
Ceff [2], where is the effective switched capacitance, Vdd is the supply voltage, and C is the number of execution cycles. Reducing the supply voltage also drops the maximum operating frequency proportionally ( Vdd ∝ f ). Thus, E could be approximated as
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
2
being proportional to the operating frequency squared ( E ∝ f ). Therefore, lowering operating frequency and according supply voltage is an effective technique for reducing energy consumption [14]. In real-time systems with periodic tasks, no deadline miss is an important requirement of the systems. For example, embedded real-time systems must complete the tasks before their deadlines to maintain the system stability. Energy-efficient scheduling for hard real-time tasks on DVS processors is to minimize the energy consumption, while all the real-time tasks are done in time. Dual-core Processor ARM
Figure 1. Heterogeneous dual-core architecture [6].
In this chapter, we focus on low power scheduling for heterogeneous dual-core embedded real-time systems. In contrast to most existing low power scheduling approaches, which targeted only a single CPU or homogeneous multi-core systems, we consider heterogeneous dual-core embedded real-time systems. The rest of the chapter is organized as follows. Section 2 presents DVS and scheduling preliminaries. Section 3 reviews related work. The system model and the proposed design
approach are described in Section 4. Simulation results are discussed in Section 5 and conclusions and future work are given in Section 6.
2. Preliminaries

DVS exploits the slack time to adjust the CPU frequency and voltage levels in order to reduce the energy consumption while guaranteeing that all tasks complete before their deadlines. Therefore, a good slack time estimation method is very important for reducing energy consumption.
2.1. Categories of Inter-Task DVS Strategies

There are two categories of DVS algorithms [3]: inter-task DVS and intra-task DVS. An inter-task DVS algorithm adjusts the CPU frequency task by task, allocating the slack time between the current task and the following tasks. An intra-task DVS algorithm adjusts the CPU frequency within a task, using the slack time that appears when a task is predicted to complete before its worst-case execution time (WCET). In this chapter, we consider inter-task DVS algorithms for periodic tasks, which usually exploit one or more of the following four strategies to estimate the slack time.
(1) Minimum constant speed [3][7]: the lowest possible clock speed that guarantees the feasible scheduling of the task set.
(2) Stretching to NTA [3][7]: this strategy relies on the scheduler already knowing the next task arrival time (NTA) of periodic tasks.
(3) Priority-based slack stealing [3][7]: not all tasks run for their worst-case execution times. If high priority tasks complete earlier than their WCETs, the next lower priority task can use the remaining slack time to adjust the frequency.
(4) Utilization updating [3][7]: this strategy estimates the required processor performance at the current scheduling point by recalculating the expected worst-case processor utilization using the actual execution times of completed task instances.

A small sketch of strategies (1) and (4) is given below.
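The sketch is only illustrative: the task representation is hypothetical, and the feasibility measure used is the worst-case utilization (the EDF bound).

```python
# Hedged sketch of strategies (1) and (4): minimum constant speed and
# utilization updating. Tasks are hypothetical (wcet, period) records.

def minimum_constant_speed(tasks):
    """Lowest normalized clock speed that keeps the task set feasible:
    the worst-case utilization sum(wcet / period)."""
    return sum(t["wcet"] / t["period"] for t in tasks)

def utilization_updating(tasks, actual):
    """Re-estimate the required speed using actual execution times of
    completed instances instead of the WCETs."""
    return sum(actual.get(t["id"], t["wcet"]) / t["period"] for t in tasks)

tasks = [{"id": 1, "wcet": 10, "period": 40},
         {"id": 2, "wcet": 11, "period": 40}]
print(minimum_constant_speed(tasks))        # 0.525
print(utilization_updating(tasks, {1: 6}))  # 0.425 after task 1 ran for 6
```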
2.2. Priority Scheduling

Existing real-time scheduling policies can be classified into the rate-monotonic (RM) scheduler and the earliest-deadline-first (EDF) scheduler. RM is a fixed-priority policy, whereas EDF is a dynamic-priority policy.
(1) Rate-monotonic (RM) scheduling: RM is fixed-priority scheduling. It always gives the highest priority to the task with the shortest period in the ready queue.
(2) Earliest-deadline-first (EDF) scheduling: EDF is dynamic-priority scheduling. It always gives the highest priority to the task with the earliest deadline in the ready queue.
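As a minimal illustration of the two policies, the snippet below picks the next task from a hypothetical ready queue under each rule.

```python
# Ready queue entries are hypothetical: (name, period, absolute_deadline).
ready_queue = [("T1", 40, 35), ("T2", 20, 50), ("T3", 30, 25)]

rm_choice = min(ready_queue, key=lambda t: t[1])   # RM: shortest period
edf_choice = min(ready_queue, key=lambda t: t[2])  # EDF: earliest deadline

print("RM picks:", rm_choice[0])    # T2 (period 20)
print("EDF picks:", edf_choice[0])  # T3 (deadline 25)
```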
3. Related Work

Recently, in pursuit of high computation performance and lower energy consumption, research on multi-core embedded systems has become increasingly popular [4] [5] [6] [10] [11] [15]. There are two categories of multi-core architectures: a chip package whose cores are symmetric is called a homogeneous multi-core; otherwise, with asymmetric processors in the chip package, it is called a heterogeneous multi-core.

In homogeneous multi-core systems, Alon et al. [4] showed that the total energy consumption of single-threaded (ST) applications differs from that of multi-threaded (MT) applications: on Intel Core Duo systems, MT code consumes half the energy of the corresponding ST code in corresponding performance states and halves the total execution time. For the real-time loop scheduling problem, Chen et al. [15] proposed retiming and rotation algorithms; by reducing the task schedule length and using the slack time, more energy can be saved.

For scheduling periodic hard real-time tasks on heterogeneous dual-core systems composed of ARM and DSP cores, Gai et al. [5] proposed a mechanism that divides the tasks into two groups, regular and DSP, where a regular task has no DSP workload and a task with DSP workload is called a DSP task, and each group has its corresponding queue. This increases the schedulability bound of the considered architecture and allows a more efficient use of the computational resources while still maintaining some kind of real-time guarantee. However, there are two problems in [5]: high priority DSP tasks can be blocked by low priority DSP tasks, and a regular task with low priority can be executed earlier than a DSP task with high priority. To address these two problems, Kim et al. [6] proposed a new scheduling model using a single queue that combines regular tasks and DSP tasks ordered by priority; it has better schedulability and fewer deadline misses. Chen et al. [17] proposed an on-line dual-core (processor and coprocessor) scheduling framework for dynamic workloads with real-time constraints; however, the low power issue was not addressed.

For mixed workloads composed of periodic and aperiodic real-time jobs on heterogeneous distributed real-time embedded systems, Schmitz et al. [10] proposed an energy-efficient genetic list scheduling algorithm (EE-GLSA) and an energy-efficient genetic list mapping approach (EE-GMA) to obtain each aperiodic job's task mapping and find the shortest task schedule length. This yields higher energy reductions compared with previous DVS scheduling approaches based on constructive techniques, and total energy savings for mapping- and scheduling-optimized DVS systems.

Regarding power-efficient scheduling based on task critical path analysis, Luo et al. [11] showed that static and dynamic variable voltage scheduling algorithms for hard and soft
real-time systems can miss hard task deadlines in heterogeneous distributed real-time embedded systems.

In summary, Table 1 shows a qualitative comparison of several existing low power DVS algorithms and the proposed P-LCET (preemptive longer common execution time) and NP-LCET (non-preemptive longer common execution time) algorithms for heterogeneous dual-core embedded real-time systems. The multi-core type metric describes whether the multi-core is homogeneous or heterogeneous. The total energy consumption metric indicates the total CPU energy consumption using each algorithm. The number of preemptions metric indicates the frequency of preemptions. The average waiting time metric indicates the average queuing time of tasks. The deadline miss metric indicates whether real-time tasks can fail to complete before their time constraints. In Section 5, we will compare our proposed P-LCET and NP-LCET with the work by Kim et al. [6], since it has no deadline misses.

Table 1. Qualitative comparison of related work

Algorithm           | Multi-core type | Total energy consumption | Number of preemptions | Average waiting time | Deadline miss
--------------------|-----------------|--------------------------|-----------------------|----------------------|--------------
Intel dual-core [4] | Homogeneous     | Low                      | N/A                   | N/A                  | N/A
ILOSA [15]          | Homogeneous     | Low                      | N/A                   | N/A                  | N/A
EE-GLSA [10]        | Heterogeneous   | Medium                   | N/A                   | N/A                  | N/A
S-and-D [11]        | Heterogeneous   | Medium                   | N/A                   | N/A                  | N/A
Gai et al. [5]      | Heterogeneous   | High                     | Medium                | Medium               | Yes
Kim et al. [6]      | Heterogeneous   | High                     | Medium                | Low                  | No
P-LCET (proposed)   | Heterogeneous   | Low                      | Medium                | Low                  | No
NP-LCET (proposed)  | Heterogeneous   | Lowest                   | Low                   | Medium to High       | Yes
Figure 2. Heterogeneous dual-core task model ($C_i^{pre}$ and $C_i^{post}$ on the CPU, $C_i^{DSP}$ on the DSP).
4. Proposed Low Power Scheduling Algorithm

4.1. System Model

The target architecture is a heterogeneous dual-core embedded real-time system, composed of ARM and DSP cores, that can change its supply voltage and operating frequency continuously within the operational ranges $[V_{min}, V_{max}]$ and $[f_{min}, f_{max}]$; all cores need to execute at the same frequency [8]. A task set $T$ of $n$ periodic tasks is denoted as $T = \{T_1, T_2, T_3, \ldots, T_n\}$. Each task $T_i$ has its own period $p_i$ and worst-case execution time (WCET) $w_i$. The deadline $d_i$ of $T_i$ is assumed to be equal to its period $p_i$.
The $j$th instance of $T_i$ is denoted by $T_{i,j}$. Each task releases its instances periodically and all tasks are assumed to be mutually independent [14]. The heterogeneous dual-core task model we used for low power scheduling with DVS is illustrated in Figure 2. Each task executes for $C_i^{pre}$ units of time on the master CPU and may request a DSP activity; we assume that each task performs at most one DSP request. After $C_i^{DSP}$ units of time on the DSP, it executes for another $C_i^{post}$ units, such that $C_i = C_i^{pre} + C_i^{DSP} + C_i^{post}$ [6].
Besides, in considering support for power saving, the ACPI (Advanced Configuration and Power Interface) specification [9] is the most widely used technique for the CPU. The processor power states of ACPI are shown in Figure 3 and detailed descriptions follow.
Figure 3. Processor power states of ACPI [9]: the G0 working state contains the processor power states C0-C3, with the performance states Px and throttling within C0.
4.2. State Definitions [9]

G0 Working: In this state, peripheral devices (peripherals) have their power states changed dynamically.

C0 Processor Power State: While the processor is in this state, it executes instructions.

P0 Performance State: While a device or processor is in this state, it uses its maximum performance capability and may consume maximum power.
P1 Performance State: In this performance state, the performance capability of a device or processor is limited below its maximum, and it consumes less than maximum power.

Pn Performance State: In this performance state, the performance capability of a device or processor is at its minimum level, and it consumes minimal power while remaining in an active state.

The hardware support constraint we assume with ACPI [9] is the same as that of [8]: all cores that follow the ACPI in one physical package and reside in the same power domain must execute at the same performance state (P-state). This means that if one core is busy running a task at P0, the other cores in that package cannot enter lower P-states.
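As an illustration of this package-level constraint, a frequency governor could resolve per-core P-state requests as sketched below; the function and the encoding (0 = P0 = highest performance) are hypothetical, not part of the ACPI specification.

```python
# Resolve per-core P-state requests within one package/power domain:
# the most demanding request (the lowest P number) wins for all cores.

def package_pstate(requested_states):
    return min(requested_states)

print(package_pstate([0, 2]))  # 0: a core busy at P0 keeps both at P0
print(package_pstate([3, 2]))  # 2: the package can only drop to P2
```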
4.3. Problem Statement

In this chapter, we propose a longer common execution time (LCET) algorithm to reduce the total execution time of tasks and the total energy consumption in heterogeneous dual-core embedded real-time systems. From [4], we know that if tasks can be executed more concurrently on the multiple cores, the total execution time of the tasks and the total energy consumption can be decreased.
Figure 4. An example: two different tasks, each with ARM and DSP execution segments.
Figure 5. Different scheduling policies: (a) task 1 executes first; (b) task 2 executes first.
To illustrate our LCET algorithm, we use an example with two tasks, task 1 and task 2, which have different priorities and execution times and run on a dual-core composed of ARM and DSP cores, as shown in Figure 4. Figure 5 shows that different scheduling policies result in different total execution times and total energy consumption for task 1 and task 2. We
assume the two tasks arrive at the same time. If task 1 has higher priority than task 2 and executes first, the total execution time becomes longer than if task 2 executes first, and thus the total energy consumption is higher. Therefore, using different scheduling priorities for tasks with different structures of $C_i^{pre}$ and $C_i^{DSP}$ will affect the total execution time and total energy consumption. To deal with this problem, we propose a priority scheduling algorithm that aims to improve the total execution time and the total energy consumption in heterogeneous dual-core systems.
4.4. Proposed Algorithms: P-LCET and NP-LCET

Before describing our algorithms, we first define the scheduling priority of task $i$, as shown in equation (1):

Scheduling priority: $C_i^{pre} / C_i^{DSP}$   (1)
P-LCET:

Input: $T = \{T_1, T_2, T_3, \ldots, T_n\}$, $U_{worst} = \sum_{i=1}^{n} w_i/p_i$.
Output: total execution time (TET) and average waiting time (TWT) of the tasks.

1. for (i = 1 to n)
       if ($T_i$ arrives)
           if (priority of $T_i$ > priority of the running task $T_j$)
               preempt $T_j$ and schedule it by the scheduling priority in the queue;
           else
               schedule $T_i$ by the scheduling priority in the queue;
2. TET = total execution time of the tasks in the queue;
   TWT = average waiting time of the tasks in the queue;
3. $f_{ar} = TET / \sum_{i=1}^{n} w_i$
4. $f = f_{ar} \times U_{worst} \times f_{max}$

Figure 6. Algorithm of P-LCET.
We set a task $i$ with scheduling priority $C_i^{pre}/C_i^{DSP} < 1$ to have higher priority than a task $j$ with $C_j^{pre}/C_j^{DSP} > 1$. For those tasks with $C_i^{pre}/C_i^{DSP} < 1$, we set the task with the shorter $C_i^{pre}$ to have higher priority. Similarly, for tasks with $C_i^{pre}/C_i^{DSP} > 1$, we set the task with the longer $C_i^{DSP}$ to have higher priority. The objective is to find a scheduling solution for the tasks with the shortest total execution time.
NP-LCET:

Input: $T = \{T_1, T_2, T_3, \ldots, T_n\}$, $U_{worst} = \sum_{i=1}^{n} w_i/p_i$.
Output: total execution time (TET) and average waiting time (TWT) of the tasks.

1. Set TTH and TTR.
2. for (i = 1 to n)
       if ($T_i$ arrives)
           task number++;
           schedule $T_i$ by the scheduling priority in the ready queue;
       if (task number == TTH || TTR expires)
           TET = total execution time of the tasks in the queue;
           TWT = average waiting time of the tasks in the queue;
           $f_{ar} = (TET + TWT) / \sum_{i=1}^{n} w_i$
           $f = f_{ar} \times U_{worst} \times f_{max}$

Figure 7. Algorithm of NP-LCET.
We propose two scheduling algorithms, Preemptive LCET (P-LCET) and Non-Preemptive LCET (NP-LCET), with and without consideration of the preemption policy, shown in Figure 6 and Figure 7, respectively. In the P-LCET algorithm, a high priority task can preempt a running task with low priority. In the NP-LCET algorithm, by contrast, we need to wait for all the tasks to arrive before executing NP-LCET. We use an example with three tasks (T1, T2 and T3), as shown in Table 2, to illustrate the P-LCET and NP-LCET algorithms. Figure 8(a) shows the scheduling result of the P-LCET algorithm. First, T1 arrives and starts running at t = 0. When T2 arrives at t = 6, T2 has a higher priority (5/5) than T1 (4/3), and it preempts T1. When T3 arrives, it has a higher priority (2/7) than T2 (3/5), so T2 is preempted; the total execution time is 26 ms. In Figure 8(b), because the NP-LCET algorithm waits for all the tasks to arrive, the execution order of the three tasks is T3, T2 and T1, based on their scheduling priorities 2/7, 5/5 and 10/3, respectively, and the total execution time is 21 ms.

Table 2. Example task set

Task | C^pre | C^DSP | C^post | Period
T1   | 10 ms | 3 ms  | 1 ms   | 40 ms
T2   | 5 ms  | 5 ms  | 1 ms   | 40 ms
T3   | 2 ms  | 7 ms  | 1 ms   | 40 ms
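Assuming the $C_i^{pre}/C_i^{DSP}$ reading of equation (1) that the worked example uses, the sketch below reproduces the NP-LCET ordering for the Table 2 task set.

```python
# Ordering the Table 2 tasks by the LCET scheduling priority, read as
# the ratio C_pre / C_DSP (a smaller ratio means a higher priority).

tasks = {"T1": (10, 3, 1),   # (C_pre, C_DSP, C_post) in ms
         "T2": (5, 5, 1),
         "T3": (2, 7, 1)}

order = sorted(tasks, key=lambda name: tasks[name][0] / tasks[name][1])
print(order)  # ['T3', 'T2', 'T1'] with ratios 2/7, 5/5 and 10/3
```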
Figure 8(b) shows that NP-LCET yields a shorter total execution time on the CPU and DSP and lower total energy consumption than P-LCET. Although NP-LCET has the better scheduling result, it increases the average waiting time because it waits for the next tasks to arrive. In embedded real-time systems, in order to ensure that each task can complete its work before its deadline while avoiding a long average waiting time, we use two different
thresholds, called Task-Threshold (TTH) and Timer-Threshold (TTR), as shown in Figure 9, where TTH bounds the number of tasks that can wait and be scheduled in the ready queue, and TTR is the time interval during which the scheduler can wait for the next task to arrive.
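A minimal sketch of the resulting release test might look as follows; the function name and the values are illustrative.

```python
# NP-LCET releases the queued tasks when either threshold fires:
# TTH bounds how many tasks may accumulate, TTR bounds the wait time.

TTH = 4       # task-count threshold, illustrative
TTR = 10.0    # timer threshold in ms, illustrative

def should_release(queued_tasks, elapsed_ms):
    return queued_tasks >= TTH or elapsed_ms >= TTR

print(should_release(2, 3.0))    # False: keep waiting for arrivals
print(should_release(4, 3.0))    # True: TTH reached
print(should_release(2, 10.0))   # True: TTR expired
```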
Figure 8. A task scheduling example for P-LCET and NP-LCET: (a) an example of P-LCET; (b) an example of NP-LCET.
Figure 9. The two thresholds (Task-Threshold and Timer-Threshold) used in the ready queue.
Before applying DVS to our proposed algorithms, NP-LCET and P-LCET, we first need to estimate the worst-case utilization $U_{worst}$, which can be computed by equation (2):

$U_{worst} = \sum_{i=1}^{n} \frac{w_i}{p_i}$   (2)

where $n$ is the number of tasks in the task set, $w_i$ is the worst-case execution time of task $T_i$, and $p_i$ is the period of task $T_i$. Based on the scheduling results of the P-LCET and NP-LCET algorithms, we use the total execution time (TET), the average waiting time (TWT) and the total worst-case execution time ($\sum_{i=1}^{n} w_i$) to derive a frequency adjustment ratio ($f_{ar}$), as shown in equations (3) and (4), respectively:
In NP-LCET:

$f_{ar} = (TET + TWT) / \sum_{i=1}^{n} w_i$   (3)

In P-LCET:

$f_{ar} = TET / \sum_{i=1}^{n} w_i$   (4)
The operating frequency $f$ is set according to equation (5):

$f = f_{ar} \times U_{worst} \times f_{max}$   (5)

where $f_{max}$ is the maximum CPU frequency.
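Equations (2)-(5) are straightforward to evaluate; the sketch below does so for the Table 2 task set with $f_{max}$ normalized to 1.

```python
# Equations (2)-(5) for the Table 2 task set (times in ms, f_max = 1).

def u_worst(tasks):                        # equation (2)
    return sum(w / p for w, p in tasks)

def f_ar_np_lcet(tet, twt, tasks):         # equation (3)
    return (tet + twt) / sum(w for w, _ in tasks)

def f_ar_p_lcet(tet, tasks):               # equation (4)
    return tet / sum(w for w, _ in tasks)

def frequency(f_ar, u, f_max=1.0):         # equation (5)
    return f_ar * u * f_max

tasks = [(14, 40), (11, 40), (10, 40)]     # (w_i, p_i) for T1-T3
print(u_worst(tasks))                                     # 0.875 = 35/40
print(frequency(f_ar_p_lcet(26, tasks), u_worst(tasks))) # 0.65 = 26/40
```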
For the task set in Table 2, if we use the maximum frequency and set $f_{max} = 1$ to run the task schedule shown in Figure 8(a), the total energy consumption is 26 mJ. If we use the minimum constant speed, $f = U_{worst} \times f_{max}$, with $U_{worst} = \sum_{i=1}^{n} w_i/p_i = 35/40$, we have the operating frequency $f = 35/40$. Notice that by lowering the frequency, the total execution time of the tasks increases. So in Figure 8(a), the total execution time is 29.71 ms and the total energy consumption is 22.75 mJ. In Figure 8(a), using P-LCET, we get $f_{ar} = TET / \sum_{i=1}^{n} w_i = 26/35$, so the operating frequency can be changed to $f = f_{ar} \times U_{worst} \times f_{max} = 26/40$. Thus, the total execution time is 40 ms and the total energy consumption is 19.31 mJ. In Figure 8(b), using NP-LCET, we get a total execution time of 21 ms and an average waiting time of 11 ms (T1 = 15 ms, T2 = 10 ms, T3 = 8 ms, so TWT = (15 + 10 + 8)/3 = 11 ms). Hence $f_{ar} = (TET + TWT) / \sum_{i=1}^{n} w_i = 32/35$ and $f = f_{ar} \times U_{worst} \times f_{max} = 32/40$. After we change the frequency, the total execution time is 26.25 ms and the total energy consumption is 16.8 mJ.
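The NP-LCET numbers above can be reproduced under the assumption that the normalized energy is the stretched execution time multiplied by $f^2$; this energy model is our inference from the reported values, not an equation stated in the chapter.

```python
# Reproducing the NP-LCET figures of the worked example, assuming the
# normalized energy model E = (execution time) * f^2 with f_max = 1.

sum_wcet = 35.0                  # sum of w_i over T1-T3 (ms)
u = 35.0 / 40.0                  # worst-case utilization
tet, twt = 21.0, 11.0            # NP-LCET schedule results (ms)

f_ar = (tet + twt) / sum_wcet    # 32/35, equation (3)
f = f_ar * u                     # 32/40 = 0.8, equation (5)
stretched = tet / f              # 26.25 ms at the lowered frequency
energy = stretched * f ** 2      # 16.8 (normalized mJ)
print(f, stretched, energy)
```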
5. Simulation Results and Discussion

5.1. Simulation Model

In the simulation, we assume the heterogeneous dual-core can change its operating frequency and supply voltage continuously within the operational ranges $[f_{min}, f_{max}]$ and $[V_{min}, V_{max}]$, and all cores need to execute at the same frequency [8]. The task sets were generated using random parameters with uniform distributions with the following characteristics [5] [6]:

• The number of tasks was chosen as a random variable from 10 to 50.
• Task periods were generated from 10 to 100 ms.
• Each task performs at most one DSP request.
• The worst-case execution times were selected in such a way that the worst-case utilization $\sum_{i=1}^{n} w_i/p_i$ varied from 0.01 to 0.99.
• $C_i^{DSP}$ was generated as a random variable with uniform distribution in the range of 10% to 80% of tasks.
• TTR was set to 10 ms.
• TTH ranged from 2 to 6.
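A hedged sketch of such a task-set generator is given below; how the WCETs are scaled to hit a target utilization, and reading the $C_i^{DSP}$ range as a fraction of each task's WCET, are our assumptions about the setup.

```python
# Hypothetical task-set generator following the parameters above.
import random

def generate_task_set(target_utilization):
    n = random.randint(10, 50)                              # task count
    periods = [random.uniform(10, 100) for _ in range(n)]   # ms
    raw = [random.uniform(0.1, 1.0) * p for p in periods]   # draft WCETs
    scale = target_utilization / sum(w / p for w, p in zip(raw, periods))
    wcets = [w * scale for w in raw]
    # Assumed: one DSP request per task, 10%-80% of its WCET.
    dsp = [w * random.uniform(0.1, 0.8) for w in wcets]
    return list(zip(wcets, dsp, periods))

tasks = generate_task_set(0.5)
print(len(tasks), sum(w / p for w, _, p in tasks))  # n in [10,50], ~0.5
```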
In the following discussions, we have normalized all the simulation results of total energy consumption to that of all tasks executing for their worst-case execution times ($\sum_{i=1}^{n} w_i$) at the maximum frequency, $f_{max}$.
5.2. Effects of Worst-Case Utilization on Total Energy Consumption

Figure 10 compares the total energy consumption of NP-LCET under different TTH values and that of P-LCET under different worst-case utilization $U_{worst}$ values.
Figure 10. Total energy consumption under different worst-case utilization values.
• P-LCET reduced the total energy consumption by an average of 8% compared with Kim et al. [6] using minimum constant speed.
• NP-LCET reduced the total energy consumption by an average of 16%, 25% and 20% compared with Kim et al. [6] using minimum constant speed, as TTH was set to 2, 4, and 6, respectively.
• The result with TTH = 6 had worse total energy consumption than that with TTH = 4 because a higher average waiting time affects the derivation of $f_{ar}$.
5.3. Effects of Worst-Case Utilization on Average Waiting Time

Figure 11 compares the average waiting time of NP-LCET under different TTH values and that of P-LCET under different worst-case utilization $U_{worst}$ values.

• NP-LCET increased the average waiting time by an average of 40%, 71% and 113% compared with Kim et al. [6], as TTH was set to 2, 4, and 6, respectively.
• From the simulation results, we see that NP-LCET has a longer average waiting time because it needs to wait until the number of arrived tasks equals TTH.
Figure 11. Average waiting time under different worst-case utilization values.
5.4. Effects of Worst-Case Utilization on Deadline Miss

Figure 12 compares the deadline miss of NP-LCET under different TTH values and that of P-LCET under different worst-case utilization $U_{worst}$ values.

• The percentage of tasks completed before the deadline using NP-LCET decreases slightly when the worst-case utilization increases. This is because, when TTH or the worst-case utilization increases, the average waiting time of each task also increases, resulting in more deadline misses.
5.5. Effects of Task Number on Total Energy Consumption

Figure 13 compares the total energy consumption of NP-LCET under different TTH values and that of P-LCET under different task numbers. The worst-case utilization was set to 95%.

• We observed that the total energy consumption improvements of Kim et al. [6] and P-LCET show almost no change when the number of tasks increases. This is because the DVS algorithm relies on the worst-case utilization, not on the task number.
• The total energy consumption improvement using NP-LCET decreases conspicuously when the number of tasks in a task set increases. This is because the task arrival frequency increases, due to the lower average waiting time and the lower frequency used.
5.6. Effects of Task Number on Average Waiting Time

Figure 14 compares the average waiting time of NP-LCET under different TTH values and that of P-LCET under different task numbers. The worst-case utilization was set to 95%.

• The average waiting time of the NP-LCET algorithm clearly decreased when the number of tasks increased, due to the increased task arrival frequency.
Figure 12. Deadline miss under different worst-case utilization values.

Figure 13. Total energy consumption under different numbers of tasks.
Figure 14. Average waiting time under different numbers of tasks.
6. Conclusions and Future Work
6.1. Concluding Remarks

In this chapter, we have presented two efficient low power scheduling techniques, called P-LCET and NP-LCET, for heterogeneous dual-core embedded real-time systems. Our design approach is based on the heterogeneous dual-core architecture, task scheduling and DVS techniques. The main contribution of P-LCET and NP-LCET is that the two proposed scheduling algorithms achieve better power saving and shorter total execution time than existing approaches for heterogeneous dual-core systems. The proposed NP-LCET has better power saving and no preemption overhead, but it has a higher average waiting time and deadline misses. The proposed P-LCET has less power saving than NP-LCET, but it has a lower average waiting time and no deadline misses. The overhead of P-LCET is an increase in the number of preemptions, which results in increased energy consumption and clock cycles; fortunately, these overheads are usually small enough to be neglected.
6.2. Future Work

The proposed LCET can apply two different policies: preemptive LCET (P-LCET) and non-preemptive LCET (NP-LCET). To obtain a shorter total execution time and lower total energy consumption for all tasks, NP-LCET should be adopted, but it incurs a higher average waiting time. Although P-LCET does not save as much power as NP-LCET, its average waiting time is lower. Our future work is to integrate these two policies so as to obtain low total energy consumption and low waiting time while also considering mixed workloads. The performance and energy consumption of the integrated approach deserve further study.
References

[1] Miao, L., Qi, Y., Hou, D., Wu, C. I. & Dai, Y. H. (2007). "Dynamic power management and dynamic voltage scaling in real-time CMP systems," in Proceedings of the International Conference on Networking, Architecture, and Storage, 249-250, July.
[2] Moyer, B. (2001). "Low-power design for embedded processors," Proceedings of the IEEE, Volume 89, Issue 11, 1576-1587, November.
[3] Kim, W., Shin, D., Yun, H. S., Kim, J. & Min, S. L. (2002). "Performance comparison of dynamic voltage scaling algorithms for hard real-time systems," in Proceedings of the Eighth IEEE Real-Time and Embedded Technology and Applications Symposium, 219-228, Sept.
[4] "Intel dual-core," http://www.intel.com/technology/itj/2006/volume10issue02/art03_Power_and_Thermal_Management/p02_intro.htm.
[5] Gai, P., Abeni, L. & Buttazzo, G. (2002). "Multiprocessor DSP scheduling in system-on-a-chip architectures," in Proceedings of the 14th Euromicro Conference on Real-Time Systems, 231-238, June.
[6] Kim, K., Kim, D. & Park, C. (2006). "Real-time scheduling in heterogeneous dual-core architectures," in Proceedings of the 12th International Conference on Parallel and Distributed Systems, 1-6, July.
[7] Pillai, P. & Shin, K. G. (2001). "Real-time dynamic voltage scaling for low-power embedded operating systems," in Proceedings of the 18th ACM Symposium on Operating Systems, 89-102, October.
[8] Intel multi-core, http://www.intel.com/technology/itj/2007/v11i4/9-process/6-linux-scheduler.htm.
[9] ACPI spec., http://www.acpi.info/DOWNLOADS/ACPIspec30a.pdf.
[10] Schmitz, M. T., Al-Hashimi, B. M. & Eles, P. (2002). "Energy-efficient mapping and scheduling for DVS enabled distributed embedded systems," in Proceedings of the Design, Automation and Test in Europe Conference and Exhibition, 514-521, March.
[11] Luo, J. & Jha, N. K. (2002). "Static and dynamic variable voltage scheduling algorithms for real-time heterogeneous distributed embedded systems," in Proceedings of the 7th Asia and South Pacific Design Automation Conference and the 15th International Conference on VLSI Design, 719-726, Jan.
[12] OMAP processor, http://focus.ti.com/general/docs/wtbu/wtbuproductcontent.tsp?templateId=6123&navigationId=11991&contentId=4670.
[13] Freescale i.300-30 processor, http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=i.300-30&nodeId=01J4Fsm6cyDbFf.
[14] Chen, J. M., Wang, K. & Lin, M. H. (2007). "Energy efficient scheduling for real-time systems with mixed workload," in Proceedings of the International Federation for Information Processing, 33-44, Dec.
[15] Chen, Y., Shao, Z., Zhuge, Q., Xue, C., Xiao, B. & Sha, E. H. M. (2005). "Minimizing energy via loop scheduling and DVS for multi-core embedded systems," in Proceedings of the 11th International Conference on Parallel and Distributed Systems, 2-6, July.
[16] Yuan, C., Reddy, S. M., Pomeranz, I. & Al-Hashimi, B. M. (2005). "Battery-aware dynamic voltage scaling in multiprocessor embedded systems," in Proceedings of the IEEE International Symposium on Circuits and Systems, 616-619, May.
[17] Chen, Y. S., Chang, L. P. & Cheng, C. M. (2009). "On-line task scheduling for dual-core real-time embedded systems," in Proceedings of the 7th IEEE International Conference on Industrial Informatics, 182-187, June.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 259-265
ISBN: 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 9
WORK-ENERGY APPROACH TO THE FORMULATION OF EXPRESSION OF WIND POWER

Reccab M. Ochieng∗ and Frederick N. Onyango
Department of Physics and Materials Science, Maseno University, P.O. Box 333, Maseno, Kenya
Abstract

This paper touches on a fundamental aspect of wind energy calculation and goes ahead to formulate three expressions of wind power. The paper attempts to answer the question of whether the kinetic energy of a unit mass per second is 1/2, 1/3, or 2/3 multiplied by $\rho v^3$. The answer to this question is of importance for fluid dynamic considerations in general. The classical formulation of wind energy for turbines is based on the definition of the kinetic energy due to the wind impinging on the turbine blades. The expression of wind energy obtained is directly related to half (1/2) of the specific mass, $\rho$, multiplied by the cube of the wind velocity. Usually the assumption used is that the mass is constant; however, by changing this condition, different results arise. The approach by Zekai [1], based first on the basic definition of force and then energy (work), reveals that the same equation is valid but with a factor of 1/3 instead of 1/2. In his derivation, Zekai [1] has not given any reason as to why a factor of 2/3, which can be obtained using his approach, is not acceptable. We advance arguments to show that three expressions of wind energy are possible through physical formulation.
Keywords: power, wind, energy, velocity, work, force.
1. Introduction

Wind energy is the fastest growing source of electricity in the world. Global installations in 2005 reached more than 11,500 megawatts (MW), a 40.5 percent increase in annual additions compared with 2004, representing $14 billion in new investments [9]. In the United States, a record 2,431 MW of wind power was installed in 2005, capable of producing enough
∗ E-mail address: [email protected] (Author to whom correspondence should be addressed)
electricity to power 650,000 typical homes [10]. Despite this rapid growth, wind power is still a relatively small part of our electricity supply, generating less than one percent of the global electricity mix. The ability to harness and use wind has seen the development of many technologies such as wind electric power generation plants. In 2005, the global wind markets grew by 40.5%, generating some 12 billion euro, or 14 billion US dollars, in new generating equipment. While Europe remains the biggest market, other regions such as Asia, North America and Latin America are quickly catching up [8]. The move in this direction has been to avert the serious negative environmental effects of fossil fuel usage, together with the continuous decrease of fossil fuel reserves.

With wind energy costs competing favorably with conventional energy sources, economic advantages are beginning to emerge that make wind power quite attractive, such that wind energy farms are gaining prominence as an alternative energy source in many developed and developing countries. Even though the amount of wind energy is economically insignificant in many parts of the world, a number of nations have taken advantage of its utilization since early years whenever possible. Water pumping, grinding grain in mills, and generating electricity have been some of the major applications of wind energy, and it is possible to see these types of marginal benefits from wind power in some parts of the world. Recently, the significance of wind energy has been attributed to its environmentally friendly behavior as far as air pollution is concerned, although, to some extent, noise pollution has been observed in some modern wind farms. However, the main advantage of its cleanness seems to outweigh the single disadvantage of noise pollution, and wind power is sought wherever possible for many applications in the hope that the air pollution resulting from fossil fuel burning will be reduced [2].

The technology in converter-turbines for wind energy is advancing rapidly; however, there is a need to assess its accurate behavior with scientific approaches and calculations. The purpose of this paper is to provide some insight by extending the approach to wind energy formulation on the basis of force and then energy definitions [1]. The new formulation provides a more physical basis to the derivation of the variations in wind energy calculations.
2. General Conventional Approach to Wind Energy Calculations

Wind energy is a form of kinetic energy arising from the air movement during wind motion. The kinetic energy is expressed in its basic physical formulation as

$E = \frac{1}{2} m \dot{X}^2$   (1)

where $m$ is the mass and $\dot{X}$ is the wind velocity. This expression is conventionally used for a solid mass, but in the case of wind, air moves as a fluid. It is therefore necessary to express $m$ in terms of the specific mass, $\rho$. If the area perpendicular to the wind direction is $A$, then during a time duration $\tau$, the total amount of air mass that crosses the wind turbine with velocity $\dot{X}$ can be expressed as
$m = \rho A \tau \dot{X}$   (2)
Substitution of Eq. (2) into Eq. (1) leads to
$E = \frac{1}{2} \rho A \tau \dot{X}^3$   (3)
which is the amount of total wind energy. The wind energy per unit area per time, $E_{wind}$, is by definition

$E_{wind} = \frac{E}{A\tau}$   (4)
Substitution of Eq. (3) into Eq. (4) leads to the conventional wind power expression
$E_{wind} = \frac{1}{2} \rho \dot{X}^3$   (5)
Eq. (5) is the universal equation used invariably in wind energy calculations all over the world. The derivation of this classical expression makes direct use of the kinetic energy equation, Eq. (1), in which the material considered must have a solid, hence constant, mass.
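To give a feel for the numbers at stake, the sketch below evaluates the wind power density under the three candidate factors discussed in this paper; the air density and wind speed are illustrative values.

```python
# Wind power per unit area under the factors 1/2, 1/3 and 2/3.
rho = 1.225   # air density at sea level (kg/m^3)
v = 10.0      # wind speed (m/s), illustrative

for k, label in ((1/2, "classical Eq. (5)"),
                 (1/3, "Zekai [1]"),
                 (2/3, "Eq. (15), derived below")):
    print(f"{label}: {k * rho * v**3:7.1f} W/m^2")
# classical 612.5, Zekai 408.3, Eq. (15) 816.7 W/m^2
```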
3. Basic Physical Formulation

To obtain a more reliable and accurate formulation, we take into consideration the fluid property of the air, and hence the density, prior to the ready-made kinetic energy formulation. We start the derivations by assuming that the total force, $F$, on the turbine area $A$ due to a wind blow acts for a time duration $\tau$. According to Newton's second law, the force is defined as
$F = \frac{dp}{dt}$   (6)
where $p$ is the momentum of the material being considered. If we consider the air as a fluid, both the density (mass per unit volume) and velocity can change, the change in density being a result of the change in velocity. Eq. (6) then takes the form

$F = m\frac{d\dot{X}}{dt} + \dot{X}\frac{dm}{dt}$   (7)
On the other hand, the energy (or work) is defined physically as the product of force and distance, say $dX$, as
$dE = F \cdot dX$   (8)
Using Eq. (2), Eq. (7) takes the form

$F = \rho A \tau \dot{X}\frac{d\dot{X}}{dt} + \dot{X}\frac{d(\rho A \tau \dot{X})}{dt}$   (9)
Substituting Eq. (9) into Eq. (8) leads to

$dE = dX\left(\rho A \dot{X}\tau \frac{d\dot{X}}{dt} + \dot{X}\rho A \tau \frac{d\dot{X}}{dt}\right)$   (10)

which simplifies to

$dE = d\dot{X}\left(\rho A \dot{X}\tau \frac{dX}{dt} + \rho A \dot{X}\tau \frac{dX}{dt}\right)$   (11)
If we let $\frac{dX}{dt}$ equal the physical definition of velocity, this expression becomes

$dE = d\dot{X}\left(\rho A \dot{X}\tau\dot{X} + \rho A \dot{X}\tau\dot{X}\right)$   (12)
or

$dE = 2\rho \dot{X}^2 A \tau \, d\dot{X}$   (13)

The total energy can be obtained after integration of both sides of Eq. (13), resulting in
$E = \frac{2}{3}\rho A \tau \dot{X}^3$   (14)
This can be made similar to Eq. (5), the wind power per unit area per time, and becomes

$E = \frac{2}{3}\rho \dot{X}^3$   (15)
Figure 1. The figure shows a material volume with an area where a force field acts.
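The integration from Eq. (13) to Eq. (14) can be checked symbolically, for example with sympy:

```python
import sympy as sp

rho, A, tau = sp.symbols("rho A tau", positive=True)
u, v = sp.symbols("u v", positive=True)   # u is the integration variable

E = sp.integrate(2 * rho * A * tau * u**2, (u, 0, v))
print(E)   # 2*A*rho*tau*v**3/3, i.e. Eq. (14)
```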
4. Discussion

Even though Eq. (15) seems to arise naturally by applying physical considerations, Eq. (7) is decisive in accepting it as a tool for wind power calculation. From Newton's second law of motion, the right hand side of Eq. (7) must have units of kg·m·s⁻². The first term on the
right hand side satisfies this criterion; however, it does not seem clear that the second term does. On substitution of Eq. (2) into Eq. (7), the second term takes the form $\dot{X}\frac{d(\rho A \tau \dot{X})}{dt}$, which gives the correct unit of force. This term, however, can be evaluated only by a careful analysis of a control volume in which the mass changes with time. Two types of control volumes exist: a 'material' volume $V_{material}$ moving with the flow and covering the same mass of flow, and a volume $V_{fixed}$ fixed in space. Suppose we have a material volume covering the area where the force field $f$ [N/m³] acts (see Figure 1). This volume is a streamtube, so it coincides with the streamlines, except at the inlet and outlet planes ($A_1$ and $A_2$, respectively) where the normal velocity is zero. The fixed volume, also covering the area where $f$ acts, is also a streamtube, but now with non-zero normal velocity at the inlet and outlet planes. By continuity, the mass flow through $A_1$ and $A_2$ is equal. In both cases the streamtube extends so far up- and downstream that the pressure at $A_1$ and $A_2$ is undisturbed:
$p = p_0$. The work done by the force field is equal to the increase of the kinetic energy of the mass contained in the material control volume. Mathematically,

$\int_{V_m} f \cdot v \, dV_{material} = \frac{d}{dt}\int_{V_m} \frac{1}{2}\rho v \cdot v \, dV_{material}$   (16)
When this control volume is changed to the fixed volume, the transport theorem is important [5,6,7]. For a certain quantity $Q$ it reads

$\frac{d}{dt}\int_{V_m} Q \, dV_{material} = \frac{d}{dt}\int_{V_f} Q \, dV_{fixed} + \int_{S_f} Q v_n \, dS_{fixed}$   (17)
where the first term on the right-hand side gives the time derivative of the integrated $Q$, and the second term the transport of $Q$ integrated over the surface $S$ of $V_{fixed}$, with $v_n$ the normal component of $v$. When we assume that $V_{fixed}$ also covers the area where $f \neq 0$, Eq. (17) applied to Eq. (16) results in
$\int_{V_f} f \cdot v \, dV_{fixed} = \frac{d}{dt}\int_{V_f} \frac{1}{2}\rho v \cdot v \, dV_{fixed} + \int_{S_f} \frac{1}{2}\rho v \cdot v \, v_n \, dS_{fixed}$   (18)

$= \int_{S_f} \frac{1}{2}\rho v \cdot v \, v_n \, dS_{fixed}$   (19)
The $\frac{d}{dt}$ term on the right hand side of Eq. (18) vanishes because we have steady flow. Since the normal flow at the control volume $V_{fixed}$ is non-zero only at the inlet and outlet planes, we have:
$\int_{V_f} f \cdot v \, dV_{fixed} = \int_{A_{2f}} \frac{1}{2}\rho v^3 \, dA_{2,fixed} - \int_{A_{1f}} \frac{1}{2}\rho v^3 \, dA_{1,fixed}$   (20)
The work done by the force field per second, the left hand side, equals the increase of the term $\frac{1}{2}\rho v^3$ during the passage through the streamtube.
5. Conclusion

The difference between Eq. (15), the conventional expression Eq. (5), and Zekai's [1] derivation is the numerical factor of 2/3 instead of 1/2 or 1/3. The factor of 2/3 arises as a result of the inclusion of the second term of Eq. (7) in the calculations. However, according to the argument advanced in this work, this term should drop out, and a factor of 1/3 is obtained due to the continuity and conservation of mass laws. Through consistent arguments and certain laws of physics, Eq. (20) gives a factor of 1/2, but with the a priori condition that the kinetic energy must obey half the mass multiplied by the velocity squared. On the other hand, when using the work-energy theorem, one starts from the fact that work done on a body gives it a certain amount of kinetic energy. It is therefore important to re-evaluate wind energy calculations and subject the 1/2 or 1/3 factor to thorough experimentation for validity. When a factor of 1/3 is used instead of 1/2, there is about a 100/3 percent difference (relative error) between the formulations. If 1/3 is used to calculate the power in the wind using the Betz criterion [3,4], there will be a downward shift of the energy versus velocity curves.
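The quoted relative error follows directly from the two factors:

$\frac{1/2 - 1/3}{1/2} = \frac{1}{3} \approx 33.3\% = \frac{100}{3}\,\%$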
Acknowledgement

One of the authors, Reccab Ochieng, would like to thank the Department of Physics, University of Zambia, for hosting him during the writing of this paper. He would also like to extend his appreciation to Professor S. F. Banda, Dean, School of Natural Sciences, University of Zambia, for the support accorded to him during his stay at the University.
References

[1] Zekai, S. A short note on a new wind power formulation. Renewable Energy, 28, 2003, 2379-2382.
[2] Anderson, M. Current status of wind farms in the UK. Renewable Energy System, 1992.
[3] Betz, A. Die Naturwissenschaften, XV, 10th Nov 1927.
[4] Shephard, ML; Chaddock, JB; Cocks, FH; Herman, CM. Introduction to Energy Technology. Ann Arbor Publishers Inc., Michigan, 1976.
[5] Batchelor, GK. Introduction to Fluid Dynamics. Cambridge University Press, 131-136, 1994.
[6] Kundu, KP. Fluid Mechanics. Academic Press, 75-79, 1990.
[7] Faber, TE. Fluid Dynamics for Physicists. Cambridge University Press, 37-40, 1995.
[8] Global Wind Energy Council (GWEC), Global wind report (2005). Online at http://www.greenpeace.org.international/press/report.
[9] Global Wind Energy Council (GWEC). 2006. Record year for wind energy: Global wind power market increased by 40.5% in 2005. Online at http://www.gwec.net/index.php?id=30&no_cache=1&tx_ttnews%5Btt_news%5D=21&tx_ttnews%5BbackPid%5D=4&cHash=d0118b8972.
[10] American Wind Energy Association (AWEA). 2006. Windpower outlook 2006. Online at http://www.awea.org/pubs/documents/Outlook_2006.pdf.
In: Advances in Energy Research, Volume 1 Editor: Morena J. Acosta, pp. 267-314
ISBN 978-1-61668-994-0 © 2010 Nova Science Publishers, Inc.
Chapter 10
ESTIMATING ENERGY CONSUMPTION AND EXECUTION TIME OF EMBEDDED SYSTEM APPLICATIONS

Gustavo Callou∗, Paulo Maciel†, Ermeson Andrade‡, Bruno Nogueira§, Eduardo Tavares¶ and Carlos Araujo‖
Center for Informatics (CIn), Federal University of Pernambuco, Recife, PE, Brazil
Abstract

Over the last years, the issue of reducing energy consumption in embedded system applications has received considerable attention from the scientific community, since responsiveness and low energy consumption are often conflicting requirements. Moreover, embedded devices may also have timing constraints, in the sense that not only the logical results of computations are important, but also the time instants at which they are obtained. In this context, this chapter presents a methodology applied in early design phases for supporting design decisions on the energy consumption and performance of embedded system applications. The proposed methodology adopts a formalism for modeling the functional behavior of hardware architectures at a high level of abstraction. It considers an intermediate model which represents the system behavioral description and, through the composition of these basic models, the scenarios are analyzed. The intermediate model is based on Coloured Petri Nets, a formal behavioral model that not only allows software execution analysis, but is also supported by a set of well-established methods for property verification. In addition, this chapter also presents ALUPAS, a software tool developed for estimating the energy consumption and execution time of embedded systems. ALUPAS can provide important insights to the designer about the battery lifetime as well as parts
∗ E-mail address: [email protected]
† E-mail address: [email protected]
‡ E-mail address: [email protected]
§ E-mail address: [email protected]
¶ E-mail address: [email protected]
‖ E-mail address: [email protected]
of the application that need optimization. Lastly, real case studies as well as customized examples illustrate the applicability of the proposed methodology, in which non-specialized users do not need to interact directly with the Petri net formalism. It is also important to highlight that pieces of code that are either energy or time consuming were identified. Moreover, the simulations provide accurate results with much smaller computational effort than measurements on the hardware platform.
1. Introduction

An embedded system is one whose principal function is not computational, but which is controlled by a computer (e.g.: a microprocessor or microcontroller) embedded within it [Wil01]. The word embedded means that the computer lies inside the overall system, hidden from view, forming an integral part of a greater whole; as a result, the user may be unaware of the computer's existence [Tav06]. Nowadays, embedded systems are present in practically all areas of human life. Mobile phones, clocks, refrigerators, microwaves, oscilloscopes and routers are a few examples of devices that have a digital processor responsible for performing specific tasks. Within such devices, embedded applications that always run the same tasks are present, and thus software updates after the device is in production are unusual. Besides, embedded systems do not terminate, unless they fail [Lee02]. Depending on the purpose of the application, the design of embedded systems may have to take into account several constraints, for instance, time, size, weight, cost, reliability and energy consumption. Furthermore, advances in microelectronics have allowed for the development of embedded systems with several complex features, thereby upholding the development of powerful mobile mechanisms such as military gadgets (e.g.: spy satellites and guided missiles) and medical devices (e.g.: thermometers and pulse-oximeters). These devices generally rely on constrained energy sources (e.g.: batteries), in such a way that if the energy source is depleted, the system stops functioning. Power consumption control is also becoming an important design goal in designs that are not battery-operated, because the excessive heat generated by high power consumption can seriously degrade chip performance and cause physical damage to the chip [HZDS95]. Hence, estimating energy consumption in early design phases can provide important insights to the designer about the battery lifetime as well as the parts of the application that need optimization.

Embedded applications that deal with time constraints are classified as Embedded Real-Time Systems (ERTS). In these systems, not only the logical results of computations are important, but also the time instants at which they are obtained. Some constraints are considered "hard", while others are "soft", meaning the timing deadlines may or may not be violated. In other words, a soft ERTS accepts a small delay in obtaining the results (e.g.: web servers, mobile phones, Voice over Internet Protocol (VoIP) calls, digital TV, web video conferences, and others). On the other hand, in a hard ERTS, if the time constraints are not satisfied, a catastrophe may occur (e.g.: car races, health care devices, military applications, aircraft and nuclear control centers) [TMSO08]. Hence, time predictability is an essential issue in the development life cycle of those systems [BL04, TMS+07].

Additionally, this chapter is concerned with the adoption of formal models for modeling hard real-time systems with energy constraints, as well as the utilization of techniques
for estimating their energy consumption and execution time. A formal approach, based on Coloured Petri Nets (CPN), for estimating the execution time as well as the energy consumption of embedded system applications through a stochastic simulation method is presented. A formal mechanism such as CPN has been adopted in order to trade off accuracy and modeling effort through system representations at different abstraction levels, which might focus on processor instructions or on high-level programming languages; applications may thus be modeled instruction by instruction or by blocks of instructions.

Without loss of generality, there are two basic simulation-based approaches for estimating embedded software energy consumption: (i) instruction-based simulation and (ii) hardware-based simulation [NN02]. In hardware simulation, despite the very high computational effort, more accurate results may be obtained in comparison with instruction simulation, due to the laborious system specification. However, instruction simulation has been adopted by many works in order to provide energy consumption estimates within a satisfactory period of time. Although there are some works on these methods, to the best of our knowledge, only a small number can represent embedded applications at different abstraction levels with good accuracy when estimating energy consumption and execution time within a short runtime.

In addition, performance has been a central issue in the design, development and configuration of systems [Wel02]. Performance as well as power become even more important if we consider embedded systems with energy and time constraints. In this context, it is not enough to know that systems work properly; they must also work effectively in order to respect their constraints. Performance analysis studies have been conducted to evaluate existing and/or planned software, to compare alternative configurations and to find an optimal system configuration. Thus, being able to estimate the performance and power consumption of a system is important because, if such requirements are not satisfied, the system designers can make changes at a very early stage of the design, thereby saving both time and money. The redesign of both software and hardware is costly and may cause late system delivery. Figure 1 shows the error detection costs at different stages of the development life cycle, demonstrating that errors detected earlier cost companies less money. Moreover, as systems become more and more complex, the adoption of formal evaluation models can significantly help to reduce the global development cost of embedded systems.
Next, Section 6. explains the simulation environment. Section 7. shows experiments conducted using the proposed methodology. Finally, Section 8. summa-
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
270
Gustavo Callou, Paulo Maciel, Ermeson Andrade et al.
Figure 1. Cost to repair a defect according to the stage it is discovered. rize this chapter.
2.
Background
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
This section briefly shows a summary of the background information needed for a better understanding about this chapter. First of all, it is performed an overview of energy consumption and performance evaluation, including measurement techniques and evaluation models. After that, it is presented the system classification. Next, it is shown an overview about Coloured Petri nets (CPN).
2.1.
Energy Consumption and Performance Evaluation
Energy is one of the most important non-functional requirements for embedded system design. It is important to stress that the energy consumption of embedded system depends on the hardware platform and software. The energy design problems can be classified into two groups: (i) analysis and (ii) optimization [Yea98]. Analysis problems are concerned with the accurate estimation of the energy consumption in order to assure that the energy consumption constraints are not violated. The analysis techniques differ in their accuracy and efficiency, in which the accuracy depends on the available design information. In early design phases, the focus should be to obtain energy consumption estimates quickly through little design information. Thus, in such phases, less accuracy results are expected. As the design proceeds, more details are available and more accurate results can be obtained through longer analysis time. Optimization has been considered as the process that improves the design without violating any design specification. An automatic design optimization requires a fast analysis engine to evaluate different design scenarios. On the other hand, manual optimization demands a tool in order to provide energy consumption estimation of different design choices. It is important to highlight that a design decision involves trade-offs from different sources such as the impact to the circuit delay, which affects the performance and throughput of the
chip, and the chip area, which may increase the manufacturing costs. Furthermore, design decisions made to achieve a low energy consumption may affect other factors such as cycle time, quality and reliability.
Nowadays, it is not always enough to know that systems work properly; they must also work effectively. Thus, Performance Evaluation (PE) is often a central issue in the design, development, and configuration of systems. The goals of PE may be to maximize the throughput of the system, to process a given workload at a minimum cost (e.g.: to reduce the energy consumption), or any number of other objective functions [Luc71]. These goals provide the overall environment for evaluation and determine what level of effort can be devoted to the models or measurement techniques that should be applied in order to obtain the performance metrics of a system. In addition, performance analysis studies are conducted to evaluate existing or planned systems, to compare alternative configurations, or to find an optimal configuration of a system [Wel02]. The following sections present an overview of the evaluation models and measurement techniques that have been adopted in order to measure and estimate the energy consumption and execution time of embedded system applications.
2.2. Evaluation Models
Performance evaluation can be classified into performance modeling and performance measurement [Joh06]. There are advantages and drawbacks to each of these techniques. The most direct method for performance as well as power evaluation is based on actual measurements of the system under study. Although measurement techniques can provide exact answers regarding performance and power, during the design phase the system (hardware prototype) is not always available for such experiments, and yet the performance of a given design needs to be predicted to verify that it meets the design requirements and to carry out the necessary trade-offs [Bol06]. Another drawback of the measurement approach is that only the performance (and energy consumption) of the existing configuration can be measured or, in the best cases, limited reconfiguration through code changes may be allowed. Furthermore, the measurement results may or may not be accurate, depending on the current state of the system on which the technique is performed. A possible solution for this issue could be the adoption of statistical approaches to strengthen the measurement results; however, the computational (and human) effort may render this solution inadequate.
Modeling methods are typically adopted in early stages of the design process, when entire systems or prototypes are not yet available for measurements. Performance modeling may further be divided into simulation-based modeling and analytical modeling. Figure 2 shows the classification of performance evaluation, in which the analytical models deal with probabilistic methods, queuing theory, Markov models, or Petri nets [Joh06]. The basic principle of the analytic approaches is to represent the formal system description either as a single equation from which the interesting measures can be obtained as closed-form solutions, or as a set of system equations from which exact or approximate metrics can be calculated through numerical methods [Bol06]. However, in order to have tractable solutions, simplified assumptions are often made regarding the structure of the model and, hence, a compromise between tractability and accuracy is often a challenge.
Figure 2. Performance Evaluation.

In fact, Jain [Jai91] has observed that "analytical modeling requires so many simplifications and assumptions that if the results turn out to be accurate, even the analysts are surprised". An alternative to analytical models is the adoption of simulation-based models, the most popular of which are based on discrete-event simulation (DES) [Bol06]. The results obtained through simulation approaches are not as accurate as the ones provided by measurement techniques, but the precision of the estimates can be calculated. The principal drawback of simulation models, however, is the time taken to run such models for large, realistic systems, particularly when results with high accuracy (i.e.: narrow confidence intervals) are desired. Simulation approaches deal with the statistical investigation of the output data of both performance and energy analyses, and with the verification and validation of simulation experiments. It is important to state that each technique can be adopted in different situations; the decision of which approach to adopt depends on each situation. Another point that the reader should keep in mind is to create appropriate models containing only the needed details, so as to keep the models simple.
2.3. Simulation Process
Simulation is the execution of a model that reproduces the behavior of the system it represents. In this context, there are two types of systems: terminal and non-terminal. Terminal systems, also called transient systems, are those with well-determined initial and final states. Non-terminal systems, also named stationary systems, are those in which the simulation is finished through the evaluation of a statistical stop criterion instead of by an event that might happen. A stationary simulation approach has been adopted in this work.
Figure 3 depicts a general simulation process [LK99]. The simulation starts in the main program, which invokes the initialization routine. The initialization routine sets the simulation clock to "0" (a variable indicating the current value of simulated time), initializes the counters (variables used for storing statistical information about system performance and energy consumption), and starts the event list (a list that contains the transition times for each transition able to fire). Afterwards, the main program invokes the timing routine, which determines the next event type (the transition to be fired) and advances the simulation clock. Next, the main program invokes the event routine, in which the system state and statistical counters are updated, and future events are generated and added to the event list.
Then, it is determined whether the simulation should be finished or not, according to the stop criteria evaluation. After the simulation finishes, the estimated results are shown.
Figure 3. Simulation process diagram.
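To make the cycle of Figure 3 concrete, the sketch below implements a minimal simulation loop in C for a hypothetical three-instruction program; the per-instruction times and energies and the branch probability are invented placeholder values, not measurements from this chapter.

#include <stdio.h>
#include <stdlib.h>

typedef struct { double time_us; double energy_nj; } Instr;

int main(void) {
    /* initialization routine: clock = 0, statistical counters cleared */
    Instr code[] = { {0.1, 7.5}, {0.1, 7.5}, {0.2, 9.0} };  /* assumed costs */
    double clock_us = 0.0, energy_nj = 0.0;
    int pc = 0;
    srand(42);

    while (pc < 3) {                      /* one iteration = one event fires */
        clock_us  += code[pc].time_us;    /* timing routine: advance clock   */
        energy_nj += code[pc].energy_nj;  /* event routine: update counters  */
        if (pc == 2 && (double)rand() / RAND_MAX < 0.9)
            pc = 0;                       /* branch taken with probability 0.9 */
        else
            pc++;                         /* fall through to next instruction  */
    }
    /* stop criterion here is simply reaching the end of the code */
    printf("time = %.2f us, energy = %.2f nJ\n", clock_us, energy_nj);
    return 0;
}

In the chapter's actual environment the stop criterion is statistical (see Section 6.4.3.) and the event list holds many enabled transitions, but the clock/counter/event structure is the same.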
2.4. Coloured Petri Nets
Petri Nets (PNs) [Mur89] are a graphical and mathematical modeling tool that can be applied to several types of systems, allowing the modeling of parallel, concurrent, asynchronous and non-deterministic systems. Since the seminal work, many representations and extensions have been proposed for allowing more concise descriptions and for representing system features not observed in the early models. Among the proposed Petri net extensions, it is important to stress Jensen's high-level model, the so-called Coloured Petri net (CPN) [Jen95][JKW07]. In this model, a token may have a complex data type as in programming languages; each place has a corresponding data type, hence restricting the kind of tokens it may receive; transitions process token values and create new ones with different values; and a hierarchical structure can be modeled at different abstraction levels, where each transition may describe another net (called a subnet), and so on. Indeed, CPN is a high-level model that considers abstract data types and hierarchy. The formal definition of Coloured Petri nets is based on the following entity definitions.

Definition 2.1. (Multi-set) A multi-set is a function that describes a collection of elements with identical colour (data type). Let $\mathbb{N}$ be the set of all non-negative integers. A multi-set $MS$, defined over a non-empty set $S$, is a function $m: S \to \mathbb{N}$, written as the formal sum

$MS = \sum_{s \in S} m(s)\,'s$

$S_{MS}$ denotes the set of all multi-sets over $S$. The non-negative integers $\{m(s) \mid s \in S\}$ are the coefficients of the multi-set.

Definition 2.2. (Multi-set Operations) Let $\{m, m_1, m_2\} \subseteq S_{MS}$ be multi-sets and $n$ a non-negative integer. The following basic operations are defined among multi-sets:

1. $m_1 + m_2 = \sum_{s \in S} (m_1(s) + m_2(s))\,'s$ (addition)

2. $n * m = \sum_{s \in S} (n \cdot m(s))\,'s$ (scalar multiplication)

3. $m_1 \neq m_2 \Leftrightarrow \exists s \in S : m_1(s) \neq m_2(s)$ (comparison $\neq$)

4. $m_1 \leq m_2 \Leftrightarrow \forall s \in S : m_1(s) \leq m_2(s)$ (comparison $\leq$)

5. $m_1 \geq m_2 \Leftrightarrow \forall s \in S : m_1(s) \geq m_2(s)$ (comparison $\geq$)

6. $|m| = \sum_{s \in S} m(s)$ (size)

7. $m_2 - m_1 = \sum_{s \in S} (m_2(s) - m_1(s))\,'s$, iff (if and only if) $m_2 \geq m_1$ (subtraction)
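A brief worked example (ours, not from the original text) may help fix the notation: over $S = \{a, b\}$, let $m_1 = 2'a + 1'b$ and $m_2 = 1'a + 3'b$. Then $m_1 + m_2 = 3'a + 4'b$, $2 * m_1 = 4'a + 2'b$ and $|m_2| = 4$. Neither $m_1 \leq m_2$ nor $m_1 \geq m_2$ holds, since the coefficients of $a$ and $b$ are ordered in opposite directions; consequently, $m_2 - m_1$ is undefined.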
The formal definition of Coloured Petri nets is presented as follows.

Definition 2.3. (Coloured Petri Net) The non-hierarchical Coloured Petri Net [Jen94] is a nine-tuple $CPN = (\Sigma, P, T, A, N, C, G, E, I)$ satisfying the following requirements:

1. $\Sigma$ is a finite set of non-empty types, called colour sets;

2. $P$ is a finite set of elements (places) that represent local states;

3. $T$ is a finite set of elements (transitions) that depict events and actions;

4. $A$ is a finite set of arcs such that $P \cap T = P \cap A = T \cap A = \emptyset$;

5. $N$ is a node function defined from $A$ into $P \times T \cup T \times P$;

6. $C$ is a colour function defined from $P$ into $\Sigma$;

7. $G$ is a guard function defined from $T$ into expressions such that $\forall t \in T : [Type(G(t)) = Bool \wedge Type(Var(G(t))) \subseteq \Sigma]$, where $Bool = \{true, false\}$;

8. $E$ is an arc expression function defined from $A$ into expressions such that $\forall a \in A : [Type(E(a)) = C(p(a))_{MS} \wedge Type(Var(E(a))) \subseteq \Sigma]$, where $p(a)$ is the place of $N(a)$;

9. $I$ is an initialization function defined from $P$ into closed expressions such that $\forall p \in P : [Type(I(p)) = C(p)_{MS}]$,

where $Type(expr)$ denotes the type of an expression, $Var(expr)$ denotes the set of variables in an expression, and $C(p)_{MS}$ denotes the set of all multi-sets over $C(p)$.
Additionally, it is important to define when a transition is enabled to fire. A boolean expression (called a guard expression) can be attached to each transition. A transition is enabled if each of its input places contains the multi-set specified by the input arc inscription and the guard evaluates to true. The formal definition of an enabled transition follows.

Definition 2.4. (Enabled Transitions) A step $Y$ is enabled in a marking $M$ if and only if the following property is satisfied:

$\forall p \in P : \sum_{(t,b) \in Y} E(p,t)\langle b \rangle \leq M(p)$

Moreover, when a transition is enabled it may occur (fire). An occurrence of a transition removes tokens from the places connected to its incoming arcs (input places) and adds tokens to the places connected to its outgoing arcs (output places), thereby changing the marking (state) of the CPN [KCJ98]. The number and colour of the tokens are determined by the arc expressions, evaluated for the occurring bindings. The formal definition is presented as follows.

Definition 2.5. (Firing of an Enabled Transition) When a step $Y$ is enabled in a marking $M_1$ it may occur, changing the marking $M_1$ to another marking $M_2$, defined by:

$\forall p \in P : M_2(p) = \Big(M_1(p) - \sum_{(t,b) \in Y} E(p,t)\langle b \rangle\Big) + \sum_{(t,b) \in Y} E(t,p)\langle b \rangle$

$M_2$ is directly reachable from $M_1$ ($M_1[Y\rangle M_2$), where the expression evaluation $E(p,t)\langle b \rangle$ computes the tokens that are removed from $p$ when $t$ occurs with the binding $b$, and $E(t,p)\langle b \rangle$ computes the tokens that are added to the places connected to outgoing arcs with the binding $b$.
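The marking update of Definitions 2.4 and 2.5 is easy to see in code. The sketch below is an illustrative reduction to an ordinary (uncoloured) net, so colours and bindings disappear and $E(p,t)$, $E(t,p)$ become integer arc weights; the three-place net and all names are made up for the example.

#include <stdio.h>

#define NP 3                                 /* places      */
#define NT 2                                 /* transitions */

int pre [NP][NT] = {{1,0},{0,1},{0,0}};      /* E(p,t): tokens removed */
int post[NP][NT] = {{0,0},{1,0},{0,1}};      /* E(t,p): tokens added   */
int M[NP] = {1, 0, 0};                       /* initial marking        */

/* Definition 2.4: t is enabled iff every place p holds >= E(p,t) tokens */
int is_enabled(int t) {
    for (int p = 0; p < NP; p++)
        if (M[p] < pre[p][t]) return 0;
    return 1;
}

/* Definition 2.5: firing yields M2(p) = (M1(p) - E(p,t)) + E(t,p) */
void fire(int t) {
    for (int p = 0; p < NP; p++)
        M[p] = (M[p] - pre[p][t]) + post[p][t];
}

int main(void) {
    if (is_enabled(0)) fire(0);              /* marking becomes (0,1,0) */
    if (is_enabled(1)) fire(1);              /* marking becomes (0,0,1) */
    printf("M = (%d, %d, %d)\n", M[0], M[1], M[2]);
    return 0;
}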
2.4.1. CPN ML Language
The CPN ML language is an extension of a well-known functional programming language, Standard ML (SML) [Har08], developed at Edinburgh University. Standard ML is a type-safe programming language that provides a richly expressive and flexible module system for structuring large programs, including mechanisms for enforcing abstraction, imposing hierarchical structure, and building generic modules. Furthermore, the language is portable across platforms and implementations because it has a precise definition. Moreover, CPN Tools [cpn07], a free environment for CPN models, adopts the CPN ML language for declarations and net inscriptions. CPN models encompass three groups: structure, declarations and inscriptions. The declarations and inscriptions of CPN models are written through extensions of Standard ML (as CPN ML), whereas the structure of a CPN model consists basically of a marked graph with places and transitions. More information about CPN ML can be found in [MLC96].
2.4.2. Hierarchical CPN
The basic idea of Hierarchical CPN (HCPN) [JKW07][KCJ98] is to allow the modeler to construct hierarchical structures, represented by high-level transitions called substitution transitions. This means that one can model a large CPN by relating smaller CPNs to each other in a well-defined way. At one level, it is possible to give a simple description of the modeled activity without having to consider internal details about how it is carried out. At another level, it is possible to specify the more detailed behavior. The model represented by a substitution transition is named a subpage, and the higher-level model, which contains the substitution transition, is the page. These pages are connected to each other by input and output places, called input and output socket places, respectively.
2.5. Embedded Systems
The advances of embedded systems have been providing more and more human-computer interaction, the so-called ubiquitous computing. Embedded devices have become so integrated into everyday objects and human activities, such as cars and telecommunication equipment, that it is difficult to notice the embedded device within them. A system is said to be embedded when it performs one or a few dedicated tasks and interacts continuously with its environment through sensors and actuators [Mar03]. The sensors are responsible for collecting information about the embedded system environment, and the actuators control that environment. Such systems only stop working when powered down. An embedded system is a special-purpose computer system that has hard project restrictions such as size, performance, cost and power. Following the success of ubiquitous computing for office and control flow applications, embedded systems are considered to be the most important application area of information technology for the coming years. Due to this fact, the term post-PC era was created, in which standard PCs will no longer be the dominant kind of hardware. Embedded systems have to be dependable in the sense that some devices are responsible for controlling safety-critical systems such as nuclear power plants, cars, trains and airplanes. In order to evaluate embedded system efficiency, the following metrics have been adopted:

• Energy and runtime efficiency: embedded devices deal with a restricted amount of resources; in order to increase their battery lifetime, the energy consumption should be reduced by adopting the smallest clock frequencies and supply voltages that still respect their time constraints.

• Code size: the code size should be as small as possible, especially for systems on a chip (SoCs), in which the integrated memory should be used very efficiently.

• Weight: low weight is an essential argument for buying mobile devices.

• Cost: low cost is an extremely crucial issue in the market.

Another important characteristic of embedded systems is that their application should be completely dedicated to the device: for an efficient system, unused memory should not be present. Moreover, the cost to fix code is very high after the embedded system has gone into production (see the maintenance cost shown in Figure 1). Furthermore, there are two kinds of time constraints: a time constraint is called hard if not meeting it could result in a catastrophe, and all others are soft. Embedded systems are called hybrid if they include both analog and digital parts: the analog parts adopt continuous signal values in continuous time, and the digital ones use discrete signal values in discrete time. Such systems are said to be reactive in the sense that they are always waiting for some input.
3. Related Works

This section summarizes the relevant related works. First, a general overview of performance evaluation is presented. After that, the related works are divided into three main sections: (i) hardware simulation-related works, in which the hardware behavior is reproduced; (ii) software simulation-related works, whose focus is to simulate the software control flow and its influence on power and performance; and (iii) hybrid approaches, which combine aspects of both hardware and software simulation techniques.
3.1. General Overview of Energy Consumption and Performance Evaluation
Herzog [Her02] shows the importance of using Formal Methods (FM) in the Performance Evaluation (PE) process. The main goal of such work is to reduce the mutual reservations between the two areas, formal specification techniques and performance evaluation: FMs may find their way into a new and very attractive area of applications, and some fundamental problems of PE may be overcome. Thus, methodological steps were proposed. Figure 4 shows a typical scenario in which the environment generates requests, the so-called workload, to the system, where:

(i) the workload represents the sum of all needed and desired activities and services;

(ii) the system consists of one or more components trying to satisfy these requests;

(iii) the system is considered optimized if it fulfills all requirements concerning Quality of Service (QoS) as well as all technical and economic constraints.

Figure 4. The System with its Environment, Requirements and Constraints.
In addition, such work presented an overview of a performance evaluation methodology, whose steps are depicted in Figure 5. It is important to stress that this methodology can also be applied to energy consumption evaluation. The first step of the methodology presented in Figure 5 is to identify the problem and to perform the requirements analysis. In order to identify any problem and/or perform the requirements analysis, some workload characterization and system parameters are needed. After that, two quite different approaches may be started: experiments monitoring a real system (measurements), and modeling techniques for workload/system behavior studies. Both are followed by analysis steps adopting statistical, stochastic and simulation methods. Then the validation is started and, finally, system structures and operating modes are synthesized.
Figure 5. Performance Evaluation Methodology.
3.2. Hardware Simulation-Related Works
Hardware components such as CPUs or memories provide structural resources, software components provide pure functionality, and some components, such as I/O controllers, provide functionality bundled with resources. Thus, in this chapter, all components related to the hardware platform (e.g.: buses, I/O controllers) have been considered as hardware. In order to model the hardware operations through microcontroller descriptions, simulation tools have been developed by some approaches. In these simulation techniques, energy consumption models have been built considering either mathematical models of the circuit (a lower level) or a higher description level such as the Register Transfer Level (RTL). PowerMill [HZDS95] is an example of a transistor-level tool that reproduces the current and power behavior in VLSI circuits. With such a tool, it is possible to simulate deep-submicron CMOS (Complementary Metal-Oxide-Semiconductor) circuits, including sophisticated circuitry such as exclusive-or gates and sense-amplifiers. Another example is QuickPower [men08], a simulator that considers the logic abstraction level through circuit simulations. In addition, a tool named SimplePower [YVKI00][IKV01] was developed to provide the energy consumed in the memory system and on-chip buses using analytical energy models. The PowerTimer toolset [BDB+03] is another simulator, developed for early-stage microarchitecture-level power-performance analysis of microprocessors. In that approach, energy functions are used in conjunction with any given cycle-accurate microarchitectural simulator. The energy functions model the power consumption of primitive and hierarchically composed building blocks such as pipeline stage latches, queues, buffers and component read/write multiplexers, local clock buffers, register files, and cache array macros. That methodology adopted analytical equations obtained from empirical circuit-level simulation experiments in order to perform the energy consumption estimates. A framework named Wattch [BTM00] was proposed for analyzing and optimizing microprocessor power dissipation at the architecture level. This approach is considered a complement to existing lower-level tools, because it allows architects to explore and cull the design space in early design phases.
SimplePower, Wattch and PowerTimer are engines that consider the architectural abstraction level and adopt an RTL model of the desired architecture in order to estimate the power consumption. Even though good results have been obtained by the adoption of such techniques, the low abstraction level demands an enormous computational effort, restricting the applicability to real-world applications; hence, such methodologies have been applied only to small code applications. Another drawback of the low-level approaches is the need for detailed hardware descriptions. Another work [HH08] adopted a modified version of the Sim-outorder simulator from the SimpleScalar suite [ALE02][Sim08] in order to investigate techniques for improving the performance of memory hierarchies for embedded systems. Sim-outorder is an execution-driven, cycle-accurate, out-of-order simulator. The adopted methodology considers precise models for the memory hierarchy and for the memory bus, and four different processors with different levels of instruction-level parallelism and complexity. A static performance evaluation methodology was proposed in [RJ03] to support early, architecture-level design space exploration for component-based embedded systems. This approach evaluates the system performance based on a scenario. For this, it focuses on an interactive definition of evaluation scenarios through incremental refinement of a functional specification to identify control flow paths corresponding to typical-case behaviors. The authors considered that the energy consumption of individual instructions does not vary considerably; to them, the energy consumption has a stronger relationship with the code control flow (e.g.: loops) than with the specific characteristics of the instruction set. Another work [BM08] adopted the SimpleScalar [BA97] architecture simulator and extended the cache model at the circuit level to allow power and performance trade-offs to be managed. This research divided the power consumption into two components: active power, defined as the power consumed by the switching parts of the digital circuits, and leakage power, defined as the power consumed by the transistors when they are off. The effects of varying the power supply voltage, the threshold voltage and the channel length on the leakage power were evaluated; the conclusion was that the channel length has the largest impact on the leakage power consumption.
3.3. Software Simulation-Related Works
The energy consumption of a microprocessor corresponds directly to the software in execution. Thus, lower energy consumption has become an essential challenge for optimizing embedded system applications. In general, the energy consumption of a piece of software is described considering the instruction set of the processor under study. An instruction can consume energy basically through two mechanisms: (i) during the instruction execution, a sequence of internal processor states is generated, and the state transitions result in a hardware energy consumption pattern named the Instruction Base Cost [LFTM97]; (ii) due to the instruction operands, the instruction can perform register changes and memory accesses that imply a dynamic energy consumption. Furthermore, some factors can increase the base cost energy consumption through register transitions and memory accesses; the register numbers, register values, immediate values of the instructions, operand addresses and operand values are examples of such factors [NKN02]. Although these factors have some influence, a mean energy consumption value can be obtained for each instruction.
Over the last years, many approaches have been developed to deal with the estimation of software execution time and energy consumption in embedded systems. Tiwari et al. [TMW94] developed an instruction-level simulation mechanism that quantifies the energy cost of individual instructions. This approach divides the code into basic blocks, which define contiguous sections with exactly one entry point and one exit point. Thus, it is possible to obtain the energy cost of a program by multiplying the base cost of each block by the number of times it was executed and summing the results. The main limitation of this approach is that it will not work for programs with larger execution times, since the ammeter may not show a stable reading.
An approach for power-aware code exploration, through an analysis mechanism based on Coloured Petri Nets (CPN), is presented in [JNM+06]. In that approach, a methodology for the stochastic modeling of the 8051-based microcontroller instruction set is demonstrated. The presented method allows setting probabilities on conditional instructions for representing complex application scenarios. The main drawback of that method is the model complexity and, as a direct consequence, the higher evaluation runtime required. Another drawback was the adoption of a generic engine for evaluating the CPN models. These restrictions do not allow the evaluation of real-life complex applications or even reasonably sized programs; also, only Assembly codes were considered.
Another approach related to energy consumption estimation is based on functional decomposition [LSJM01]. In that method, the power consumption of each functional block is computed from a set of consumption rules. These rules are represented as mathematical functions obtained from several measurements of different codes and configuration parameter values, which are extracted from the code. Thus, the energy consumption is obtained by adding up the consumption of all blocks. This work has been extended [SLJM04] into a tool to estimate the power and energy consumption of C programs and assembly code. It does not, however, provide means for structural and behavioral property analysis and verification.
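As a rough illustration of the block-based estimate of Tiwari et al. [TMW94] described above, the hedged sketch below multiplies each basic block's base cost by its execution count; the block costs and counts are invented numbers, not values from the cited paper.

#include <stdio.h>

typedef struct {
    double base_cost_nj;   /* energy of one pass through the block */
    long   exec_count;     /* how many times the block executed    */
} BasicBlock;

int main(void) {
    /* hypothetical program: entry block, a loop body run 10 times, exit */
    BasicBlock prog[] = { {120.0, 1}, {35.5, 10}, {48.0, 1} };
    double total_nj = 0.0;
    for (int b = 0; b < 3; b++)
        total_nj += prog[b].base_cost_nj * prog[b].exec_count;
    printf("estimated energy = %.1f nJ\n", total_nj);   /* 523.0 nJ */
    return 0;
}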
Another paper [KCNL08] presents an energy consumption modeling technique for microcontroller-based embedded systems, in which the number of cycles, instead of the number of executed instructions, is considered, and the energy is computed by a polynomial expression. In order to obtain such expressions, the software tasks that run on the embedded system were profiled and their characteristics were analyzed. The types of executed assembly instructions, as well as the number of accesses to the memory and to the analog-to-digital converter, are the information required for the derivation of the proposed model. An appropriate instrumentation setup was developed for measuring and modeling the energy consumption in the corresponding digital circuits. This work adopted analytical models that may require so many simplifications and assumptions that the results may turn out not to be very accurate. Muttreja et al. [MRRJ07] presented a methodology to speed up simulation-based software performance/energy estimation by adopting macromodeling. However, this methodology is only applicable to data that follows the same distribution as the data used to train the model, a restriction that reduces its applicability. An adaptation of the instruction-level power estimation model to soft-core processors implemented in FPGAs is presented in [dHAW+07]. In order to validate the methodology, the Nios II soft-core processor was adopted. In such approach, the inter-instruction costs (the cost corresponding to the transition from one kind of instruction to another) and pipeline stalls were not modeled directly; instead, a correction factor was adopted.
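The cycle-based polynomial model of [KCNL08] can be pictured with a linear sketch like the one below; the functional form is only our guess at the simplest polynomial in the quantities the paper profiles (cycles, memory accesses, A/D converter accesses), and every coefficient is an invented placeholder rather than a fitted value.

#include <stdio.h>

/* hypothetical coefficients, NOT the fitted values of [KCNL08] */
double energy_nj(long cycles, long mem_accesses, long adc_accesses) {
    const double c0 = 2.0;    /* fixed overhead per task       */
    const double c1 = 0.55;   /* cost per executed cycle       */
    const double c2 = 1.30;   /* cost per memory access        */
    const double c3 = 4.10;   /* cost per A/D converter access */
    return c0 + c1 * cycles + c2 * mem_accesses + c3 * adc_accesses;
}

int main(void) {
    printf("%.1f nJ\n", energy_nj(1000, 120, 4));   /* prints 724.4 nJ */
    return 0;
}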
3.4. Hybrid Approach
A framework that combines hybrid simulation, cache simulation and online trace-driven replay techniques to accurately predict the performance of programmable elements in embedded environments was proposed in [GKK+08]. Its simulator, called HySim, combines target-architecture-specific ISS (Instruction Set Simulator) execution with native code execution on the simulation host in order to achieve a high simulation speed. For this, a methodology is adopted in which an entire application is compiled by the target compiler to produce a target-specific binary (the input of the framework). Another work [SBVR08] presents a hybrid method that addresses performance issues by combining the advantages of simulation-based and analytical approaches, with the objective of gaining simulation runtime speed without a remarkable loss of accuracy. The methodology is based on the generation of SystemC code out of the original C code and the back-annotation of statically determined cycle information into the generated code. One drawback of that methodology is the difficulty of finding the corresponding parts of the binary code in the C source code if the compiler optimizes or changes the structure of the binary code too much; thus, such approach does not work well for some processors and/or compilers.
4. Methodology
This section depicts the proposed methodology for building embedded critical software. Next, the proposed framework for estimating the energy consumption and performance of embedded system applications is presented. Afterwards, the characterization process adopted to obtain the energy consumption and execution time values of an ARM7-based microcontroller instruction set is demonstrated.
4.1. MEMBROS
This section briefly introduces the proposed Methodology for Embedded Critical Software Construction (MEMBROS). Figure 6 depicts the core activities of the MEMBROS methodology, which organizes the activities into three groups: (i) requirements validation; (ii) energy consumption and performance evaluation; and (iii) software synthesis. Although this chapter's focus is the energy consumption and execution time estimates, an overview of the whole methodology is important in order to show how this chapter's subject connects with other works. Initially, the activities regarding requirements validation are performed. After carrying out the requirements analysis, the system requirements are modeled using a set of SysML (Systems Modeling Language) diagrams (SDs) [Sys07] that represent the functionalities of the embedded software to be developed. The SDs provide the designer with an intuitive language for modeling the requirements without knowing the details of the internal formalism, which is utilized in further activities for reasoning about quantitative/qualitative properties. Since timing and energy constraints are of utmost importance in the systems of interest, the SDs are annotated with timing and energy consumption information (e.g.: initial estimates) using MARTE (Modeling and Analysis of Real-Time and Embedded Systems) [MAR07]. Next, the annotated SDs are automatically mapped into time Petri net (TPN) models in order to lay down a mathematical basis for the analysis and verification of properties (e.g.: absence of deadlock conditions between requirements). This activity is also concerned with obtaining the best- and worst-case execution times and the respective energy consumptions, in such a way that the requirements are also evaluated as to whether the timing and energy constraints can be met. More details can be found in [CMC+08].
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
Figure 6. Methodology activity diagram.
Afterwards, the embedded software development process is started, taking into account the results obtained in the previous activities. Once a first source code release is available, the designer analyzes the code in order to assign probability values to conditional and iterative structures. It is important to state that, with such values, it is possible to simulate different software scenarios just by changing the probability annotations of the instructions. Furthermore, the compiled code with probability annotations allows designers to perform evaluations in terms of execution time and energy consumption, in such a way that these costs may be estimated before the whole system (hardware prototype) is available. For that, the compiled code is automatically translated into a Coloured Petri net (CPN) model [JKW07] in order to provide a basis for the stochastic simulation of the embedded software. An architecture characterization activity is also considered, to permit the construction of a library of basic CPN blocks [CMA+08b], which provides the foundation for the automatic generation of the CPN stochastic models (see Section 4.2.). From the CPN model (generated by the composition of basic blocks), a stochastic simulation of the compiled code is carried out considering the characteristics of the target platform. If the simulation results are in agreement with the requirements, the software synthesis is performed.
Software synthesis activities are concerned with the stringent constraints (e.g.: time and energy) and, in a general sense, are composed of two subgroups of activities: (i) tasks' handling; and (ii) code generation. Tasks' handling is responsible for task scheduling, resource management and inter-task communication, whereas code generation deals with the static generation of the final source code, which includes a customized runtime support, namely, the dispatcher. It is important to state that the concept of a task is similar to that of a process, in the sense that it is a concurrent unit activated during system runtime. For the following activities, it is assumed that the embedded software has been implemented as a set of concurrent hard real-time tasks. Initially, the task timing information as well as the information regarding the hardware energy consumption are computed through the energy consumption and performance evaluation activities. Next, the designer defines the specification of the system's stringent constraints, which consists of a set of concurrent tasks with their respective constraints, behavioral descriptions, information related to the hardware platform (e.g.: voltage/frequency and energy consumption) as well as the energy constraint. Afterwards, the specification is translated into an internal model able to represent concurrent activities, timing information, inter-task relations, such as precedence and mutual exclusion, as well as energy constraints. The adopted internal model is a time Petri net extension (TPNE), labeled with energy consumption values and code annotations. After generating the internal model (TPNE), the designer may first choose to perform property analysis/verification or carry out the scheduling activity. This chapter adopts a pre-runtime scheduling approach in order to find a feasible schedule that satisfies timing, inter-task and energy constraints. Next, the feasible schedule is adopted as an input to the automatic code generation mechanism, such that a tailored code is obtained with the respective runtime control, namely, the dispatcher.
Finally, the application is validated on a Dynamic Voltage Scaling (DVS) platform in order to check the system behavior as well as the respective constraints. Once the system is validated, it can be deployed to the real environment. The basic goal of DVS is to adjust the processor's operating voltage at run-time to the minimum level that still respects the application time constraints, in order to reduce the energy consumption.
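A textbook CMOS argument (ours, not a derivation from this chapter's models) illustrates why adjusting the voltage, and not only the frequency, is what saves energy:

$P_{dyn} \approx \alpha\,C\,V_{dd}^{2}\,f, \qquad E_{task} \approx P_{dyn} \cdot \frac{N_{cycles}}{f} = \alpha\,C\,V_{dd}^{2}\,N_{cycles}$

The dynamic energy of a task thus scales with the square of the supply voltage: halving $V_{dd}$ cuts it to roughly one quarter, at the price of the lower clock frequency the reduced voltage imposes. Pre-runtime scheduling under time constraints identifies exactly the slack in which this trade is safe.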
4.2. Energy Consumption and Performance Evaluation Framework
The model evaluation concerns the execution time and energy consumption estimates. This process aims to help designers identify the application blocks that need to be optimized and, moreover, helps them decide which code parts should be transformed into hardware components. The proposed framework takes as input Assembly or C code (see Figure 7) labeled with probabilities assigned to conditional instructions, in order to specify the system scenarios, as well as parameters for the stop criteria. The Assembly or C code is provided as an input to the assembler or compiler, which generates two outputs: the binary code (machine code) and the listing file (the file from which the probabilities and the stop criteria parameters are captured). The listing file is an output file of compilers, adopted to help designers in the debugging process (e.g.: to identify compilation issues). After that, the Binary-CPN Compiler reads these two files (generated by the assembler or by the compiler) as well as the basic CPN models, and generates two CPN models to be analyzed. The CPN-Optimized model is used to estimate the energy consumption and execution time, and the other model is adopted to validate the optimized one. These CPN models are composed of the basic models and can be read by CPN Tools and/or by the CPN Simulator in order to generate the estimated results. The CPN Simulator is a tool that evaluates the proposed CPN models in order to compute the energy consumption and execution time. This tool has been conceived as an alternative to CPN Tools, since CPN Tools' simulation mechanism is quite time-consuming when analyzing large models, being a general-purpose evaluation environment. Moreover, an automatic CPN Generator receives the processor characterization tables in order to create the Instruction-CPN models for the ARM7 (and other) processors. The execution times and the energy consumption values can be obtained from datasheets, characterization processes, measurements and so on.
4.2.1. Characterization Process
In order to obtain the energy consumption and execution time values of a microcontroller instruction set, it may be necessary to adopt measurement techniques in case such values cannot be obtained from manuals and datasheets. It is essential to present processor particularities before detailing the characterization process. Hence, this section starts by introducing some points related to the adopted Philips LPC2106 microprocessor; afterwards, the characterization scheme is detailed.
An ARM7-based microcontroller, the Philips LPC2106 [man03], has been adopted as the hardware platform for conducting the case study validation, due to its widespread use in embedded systems. The LPC2106 is a 32-bit microprocessor based on Reduced Instruction Set Computer (RISC) principles, and its instruction set and related decode mechanism are much simpler than those of microprogrammed Complex Instruction Set Computers (CISC). As a consequence, a high instruction throughput and an impressive real-time interrupt response are achieved by a small and cost-effective processor core. Another important characteristic of the LPC2106 microprocessor is the absence of an internal cache memory [Fur00]. Instead, a Memory Accelerator Module (MAM) is available. The MAM corresponds to a technique, adopted by LPC2100 family microprocessors, that attempts to have the next ARM instruction in its local memory in time for the CPU to execute it. Although this mechanism improves the microprocessor performance, it has not been considered within the scope of this chapter: the proposed methodology takes a general approach to evaluating ARM7-based microprocessors, so the characterization process was performed with the MAM mechanism turned off. However, it is possible to cover this particularity by extending the proposed methodology to perform an evaluation before each instruction is executed, reproducing the MAM technique.

Figure 7. The proposed Framework.

Additionally, it is important to state that the energy consumption depends on the instruction parameters (register values). In other words, if the same instruction is executed with different parameters, the energy consumption and execution time may differ slightly. Tiwari et al. [TMW94] showed that good estimates can be obtained without considering this issue; their experimental results demonstrated that the range of such variations corresponds to less than 5%. Figure 8 depicts a code example adopted to characterize the LPC2106 instruction set, in which an oscilloscope synchronization marker has been adopted for the code analysis. Ten thousand replications of the instruction have been performed, as Figure 8 depicts on lines 8 to 12, in order to have a precise characterization of an instruction. Furthermore, Figure 9 depicts such code on the oscilloscope (an Agilent DSO03202A), in which the signal voltages can be viewed as a two-dimensional graph. The wave shape of the electrical signal shown in Figure 9 represents the execution of the code under analysis.
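Once the waveform of the replicated block has been captured, per-instruction costs follow from simple division. The sketch below shows the arithmetic with invented placeholder measurements; the supply voltage, mean current and block duration used here are assumptions for illustration, not the LPC2106 values reported in this chapter.

#include <stdio.h>

int main(void) {
    const double vdd_v     = 3.3;      /* assumed supply voltage (V)     */
    const double i_mean_a  = 0.048;    /* assumed mean current (A)       */
    const double t_block_s = 1.25e-3;  /* assumed block duration (s)     */
    const long   n_replics = 10000;    /* mov r1,#0 copies, as in Fig. 8 */

    double t_instr = t_block_s / n_replics;        /* time per instruction */
    double e_instr = vdd_v * i_mean_a * t_instr;   /* E = V x I x t        */
    printf("time   = %.1f ns\n", t_instr * 1e9);   /* 125.0 ns             */
    printf("energy = %.2f nJ\n", e_instr * 1e9);   /* 19.80 nJ             */
    return 0;
}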
1
2  while(1){
3     int i;
4     IOSET = IOPIN | 0X00000080;    /* oscilloscope synchronization marker */
5
6     /* code */
7     __asm{
8        mov r1, #0
9        mov r1, #0
10       ... (representing 9,996 mov r1, #0)
11       mov r1, #0
12       mov r1, #0
13    }
14    /* end of code */
15
16    IOCLR = (~IOPIN) | 0X00000080; /* oscilloscope synchronization marker */
17    for (i=0; i

Figure 8. Code example adopted to characterize the LPC2106 instruction set.

Conditional transitions are adopted to represent the conditional model (see Section 5.3.). In addition to the time and energy values, a probability value is associated with these transitions:

<ConditionalTransition cy="TimeValue" energyCost="EnergyValue" id="identification" name="label" probability="value">

Stop criteria parameters. The desired execution time (ErrorMaxTime) and energy consumption (ErrorMaxEner) errors and the confidence interval (IC) are set up by the following XML.
<configuration ErrorMaxEner="value" ErrorMaxTime="value" IC="confidence interval">
6.4.2. Simulation Process
The simulation starts by reading the Sim file, initializing the statistical counters (variables used for storing statistical information about system performance and energy consumption) and setting the stop criteria parameters (confidence interval, desired energy consumption and execution time errors). Next, the simulation clock is set to "0" (a variable indicating the current value of simulated time) and the event list is created (a list that contains the enabled transitions).
Figure 26. Simulation diagram.

Afterwards, the simulation process enters a loop. The simulation loop corresponds to the evaluation of the CPN model until the estimated results reach the specified confidence degree. The loop starts and the event list is updated by a method that adds the enabled transitions and removes the disabled ones. The transition with the smallest associated time is chosen from the event list to be fired. It is worth recalling that a transition firing means that an event (an instruction or block of instructions) was executed. After each event activity (a transition firing), the simulation clock, the statistical counters and the system state are updated in order to represent the new state of the CPN model evaluation. Moreover, after every transition firing, it must be checked whether the end of the code has been reached. If so, a method determines whether the simulation should finish, according to the stop criteria evaluation (Section 6.4.3.). After the simulation finishes, the estimated results (e.g.: energy consumption and execution time values) are shown. Figure 26 shows the diagram of the adopted simulation process.
6.4.3. Stop Criteria Evaluation
A stop criteria evaluation is adopted in order to provide simulation results that take into account a specified confidence degree. Since a narrow confidence interval has been considered, the simulation process has to be executed several times (runs) to provide the estimated results [Chu04]. The number of runs depends, among other factors, on the specified confidence degree. The initial number of replication runs is specified by the analyzer. The stop criteria consider the absolute precisions for energy consumption and execution time (the designer informs the desired precisions), means and standard deviations. The absolute precision is calculated by Equation 2, in which the critical value $t_{1-\alpha/2,\,n-1}$ is computed for a $1-\alpha/2$ confidence degree and $n-1$ degrees of freedom, $s$ is the standard deviation of the replications and $n$ is the number of replications:

$AbsolutePrecision = t_{1-\alpha/2,\,n-1} \times \frac{s}{\sqrt{n}}$   (2)

Afterwards, the desired precisions related to both energy consumption and execution time are compared with the current results. The simulation is finished if the calculated values are smaller than the specified desired precisions. Otherwise, the simulation proceeds by calculating the required number of new simulation runs (replications) through Equation 3, considering the specified desired precision. There are two replication values, one for energy consumption and the other for execution time:

$i = \left(\frac{t_{1-\alpha/2,\,n-1} \times s}{DesiredPrecision}\right)^{2}$   (3)
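The test is straightforward to code. In the sketch below the Student-t critical value is approximated by the large-sample 95% value of 1.96 (a simplifying assumption; a real implementation would look it up for n-1 degrees of freedom), and the sample numbers are invented.

#include <stdio.h>
#include <math.h>

int main(void) {
    double s       = 14.15;  /* std. deviation over replications (nJ) */
    int    n       = 40;     /* replications executed so far          */
    double t       = 1.96;   /* approximates t_{1-alpha/2, n-1}       */
    double desired = 2.0;    /* desired absolute precision (nJ)       */

    double precision = t * s / sqrt((double)n);      /* Equation 2 */
    if (precision <= desired) {
        printf("stop: precision %.2f nJ reached\n", precision);
    } else {
        double i = pow(t * s / desired, 2.0);        /* Equation 3 */
        printf("continue: about %.0f replications needed\n", ceil(i));
    }
    return 0;
}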
6.5. Graphical User Interface
The ALUPAS' Graphical User Interface (GUI) provides a mechanism through which the designer does not interact directly with the internal formalism (CPN). Figure 27 depicts the ALUPAS' input interface. A simulation process starts when the designer creates a new project and inserts the code to be analyzed. Afterwards, the designer assigns the probabilistic values to the conditional instructions and sets up the stop criteria evaluation parameters (e.g.: confidence intervals, desired energy consumption and execution time errors). After a successful compilation process, a CPN model is created to be evaluated (simulated) in order to obtain the energy consumption and execution time estimates. ALUPAS can evaluate different control flow scenarios: the designer just changes the probabilistic annotation values of the conditional instructions. The estimated results are stored and can be compared to other simulation results. Figure 28 shows the ALUPAS' output interface. The Simulation Results frame depicts the estimated metrics (mean energy consumption and execution time values, and their respective standard deviations and errors). After performing the simulation, graphic representations (histogram and box plot) can be plotted. The reader should observe that different simulation results can be plotted into the same plot area (see Figure 28), which helps the designer compare different simulation scenarios.
Figure 27. ALUPAS’ input interface.
Figure 28. ALUPAS’ output interface.
6.6. Component Integration
ALUPAS has been developed to combine the proposed framework functionalities (Section 4.2.) into a unified environment with a GUI. The reader should keep in mind that the designer interacts only with the GUI; however, trained designers can view and edit the CPN model that represents the code behavior. Furthermore, the file with all simulation results is available, from which the designer can perform other statistical calculations. It is important to highlight that the component integration is quite similar to the proposed framework and, hence, it is not described in detail.
7. Case Studies
In order to illustrate the practical usability of the proposed methodology, this section presents five experiments in detail. All experiments were performed on an AMD Turion 64X2 1.6 GHz with 2 GB RAM running Windows XP. The first one exemplifies the proposed methodology on a small application code, adopted to explain the process of estimating both the energy consumption and the execution time of embedded software. The second experiment illustrates a runtime comparison between the CPN Simulator, the specific engine for evaluating the CPN models, and CPN Tools. Experiments adopting a binary search algorithm are considered, in the third case study, to show that different
scenarios can be created by changing the probabilistic annotations of the conditional code instructions. The fourth experiment, the BCNT algorithm, is considered to evaluate the proposed methodology and to illustrate in detail the proposed optimized CPN model (the model after the reduction process). The last experiment, the pulse-oximeter case study, is conducted in order to apply the proposed methodology to a real-world case.
7.1. Example One
This example has been conducted in order to exemplify the proposed methodology. Figure 29 depicts a small application code, where the values for the stop criteria evaluation are given on the first code line. Note that the confidence degree is set to 95%, the specified precision for the energy consumption is 200 nJ, and 20 µs is the specified precision for the execution time. The number of runs of each replication is set to 40, and the maximum number of replications is 10,000 (if the simulation is finished by this condition, there is no guarantee that the confidence degree is achieved). It is important to highlight that these values are chosen by the designer based on prior knowledge of the code under analysis.

1  ;
2        AREA ARMex, CODE, READONLY
3        ENTRY
4  main
5        bl proc
6        bx r14
7  proc
8        stmdb r13!,{r14}
9        bl for
10       bl for
11       mov r1,#0
12       mov r2,#0
13       add r1,r1,#1
14       add r2,r2,#2
15       ldmia r13!,{r15}
16 for   mov r4,#1
17       b test
18 loop  add r4,r4,#1
19 test  cmp r4,#0xa
20       blt loop
21 ;     bx r14
22       END

Figure 29. Annotated Assembly Code.

The registers' values have not been considered in the model; instead of comparing them, a probabilistic approach is adopted. In this example, there is a loop (lines 16-20) that is executed 10 times, so the probability is computed using the equation p = 1 - (1/N). In this case, p = 1 - 1/10 = 0.9 (the probability of the conditional instruction blt loop). This probability has been adopted to set the prob variable (see Figure 18) in the conditional CPN model.
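A quick sanity check of this abstraction (our note, not from the original text): with the branch taken with probability $p$, the number of passes through the loop test is geometrically distributed with mean $1/(1-p) = 1/0.1 = 10$, so the probabilistic model reproduces the ten iterations of the deterministic loop on average, although individual simulation runs will vary around that value.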
7.1.1. Simulation Results
The code depicted in Figure 29 was simulated in three different ways: (i) using CPN Tools with the CPN model, (ii) using CPN Tools with the optimized CPN model, and (iii) adopting the proposed CPN Simulator with the optimized CPN model. The simulation results using CPN Tools considering both the CPN model and the optimized model are identical; thus, the proposed CPN Simulator considers only the optimized model.
Table 1 presents the simulation results of both CPN Tools and the CPN Simulator. The reader should observe that standard deviations and errors are reported for both metrics (energy consumption and execution time). The time standard deviation obtained with CPN Tools was 0.19 µs. The adopted confidence degree was 95% (see the header annotation in Figure 29), so the execution time value (2.23 µs) should lie within [2.10 µs; 2.37 µs].

Table 1. Simulation results of the code in Figure 29.

                  CPN Tools      CPN Simulator
    Mean Time     2.2320 µs      2.2279 µs
    Time SD       0.1905 µs      0.1444 µs
    Time Error    0.1363 µs      0.1033 µs
    Mean Energy   167.4008 nJ    167.2632 nJ
    Energy SD     14.1533 nJ     10.7243 nJ
    Energy Error  10.1240 nJ     7.6717 nJ
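The reported error values can be reproduced with a standard Student-t confidence interval. The sketch below is a minimal illustration, assuming the interval half-width is t(1 - alpha/2, n - 1) * SD / sqrt(n) and that n = 10 replications were accumulated, which reproduces the reported time error of 0.1363 µs; neither assumption is stated explicitly in the chapter.

    from math import sqrt
    from scipy.stats import t

    def half_width(sd, n, confidence=0.95):
        # Error (interval half-width) = t(1 - alpha/2, n - 1) * SD / sqrt(n).
        return t.ppf(1.0 - (1.0 - confidence) / 2.0, df=n - 1) * sd / sqrt(n)

    mean, sd, n = 2.2320, 0.1905, 10   # CPN Tools values; n = 10 is an assumption
    err = half_width(sd, n)
    print(round(err, 4))                               # 0.1363 (µs)
    print(round(mean - err, 2), round(mean + err, 2))  # 2.10, 2.37 (µs)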
Table 2 compares the simulation results obtained through CPN Tools (considering the non-optimized model) and through the proposed CPN Simulator in order to show that these results are very close: the differences are smaller than 2%. Table 2 also compares the simulation runtime of both environments. CPN Tools spent 22 s, whereas the CPN Simulator spent less than 1 s. Thus, this simple example shows that the simulation runtime of the CPN Simulator is much shorter than that of CPN Tools.
Table 2. Comparison between simulation results.

                          CPN Tools      CPN Simulator
                          (CPN Model)    (CPN Optimized)
    Execution Time        2.2320 µs      2.2279 µs
    Energy Consumption    167.4008 nJ    167.2632 nJ
    Runtime               22 s           172 ms

7.2. Example Two
This experiment aims to show the importance of both the CPN reduction process and the CPN Simulator. It consists of codes whose instructions only use the ordinary model (see Figure 15); thus, these codes do not perform branches in the control flow. The examples were performed with 10, 20, 30, 40, 50, 100, 200, and 400 instructions in order to compare the runtime of CPN Tools and the CPN Simulator. Table 3 shows a runtime comparison of those simulation models, in which different numbers of instructions were taken into account. It is worth stressing that runtime simulation on CPN Tools is quite time consuming when analyzing large models. Table 3 also presents the CPN Simulator runtime, which is at least 91 times shorter than the respective time in the CPN Tools environment.
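Before turning to the numbers in Table 3, the sketch below illustrates why such branch-free code is so amenable to the reduction process discussed later: a sequence of ordinary-model instructions collapses into a single aggregate cost obtained by summation. The per-instruction costs shown are invented for illustration; the actual values come from the ARM7 instruction characterization, not from here.

    # Hypothetical per-instruction costs: (energy in pJ, time in ns).
    COST = {"mov": (1200, 50), "add": (1300, 50), "ldr": (2100, 75)}

    def cluster(instructions):
        # Branch-free code collapses into one aggregate (energy, time) pair,
        # which is the effect of the reduction process on ordinary-model code.
        energy = sum(COST[op][0] for op in instructions)
        time = sum(COST[op][1] for op in instructions)
        return energy, time

    print(cluster(["mov", "add", "ldr", "add"]))  # (5900, 225) -> 5.9 nJ, 225 ns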
Table 3. Comparison of the simulation runtime (CPN Model).

    N Inst.   CPN Tools     CPN Simulator   CPN Tools / CPN Simulator
    10        17 s          187 ms          91
    20        21 s          219 ms          96
    30        30 s          234 ms          128
    40        41 s          281 ms          146
    50        56 s          297 ms          189
    100       1 min 51 s    515 ms          216
    200       5 min 6 s     1 s 219 ms      251
    400       17 min 25 s   3 s 438 ms      304
The column named CPN Tools / CPN Simulator shows the ratio between the CPN Tools runtime and the CPN Simulator runtime. The CPN Simulator performed much faster simulations than CPN Tools, and it performs even better for larger models: for a 20-instruction application, the CPN Simulator was 96 times faster than CPN Tools, while for a 200-instruction application it was 251 times faster. Figure 30 depicts the simulation time comparison between CPN Tools and the CPN Simulator. A possible explanation for the poor CPN Tools runtime on the conceived (large) models is the fact that CPN Tools is a generic environment to create, edit, and simulate CPN models; such an environment needs to perform a syntax analysis process before being able to simulate. The CPN Simulator, in contrast, is an environment specifically tailored for simulating the conceived CPN models, so it does not spend time checking the models' syntax. Nevertheless, it is important to stress that the obtained models are syntactically correct and their semantics represent the programs' control flow.
Figure 30. CPN Tools versus CPN Simulator runtime.

In this example, when the optimized CPN model was considered, the adopted reduction process clustered all the instructions. For all of these examples, the CPN Tools runtime and the CPN Simulator runtime with the optimized model were 6 s and 187 ms, respectively. Thus, for the 400-instruction application, the simulation with the optimized CPN model (6 s) was 174 times
faster than the simulation with the non-optimized CPN model (17 min 25 s = 1045 s; 1045 s / 6 s is approximately 174).
7.3. Binary Search Algorithm
A binary search algorithm is a technique for finding a particular value in a sorted list (array) of values. The method starts by selecting the middle element of the array and comparing its value to the target value, determining whether it is greater than, less than, or equal to the target. If the middle element's value is greater than the target, it becomes the new upper bound of the search range; if its value is lower, it becomes the new lower bound. The technique continues iteratively, reducing the search range by a factor of two at each step. This example considers an array with 255 elements. The code was evaluated in three different scenarios: Best Case Execution Time (BCET), Typical Case Execution Time (TCET), and Worst Case Execution Time (WCET). For each scenario, different probabilistic values were associated with the conditional instructions in order to reproduce the respective behavior.
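For reference, a minimal iterative version of the algorithm is sketched below; it is a generic textbook formulation in Python, not the ARM assembly code evaluated in this experiment.

    def binary_search(array, target):
        # Iterative binary search over a sorted array; each iteration halves
        # the search range. Returns (index, iterations) or (-1, iterations).
        low, high = 0, len(array) - 1
        iterations = 0
        while low <= high:
            iterations += 1
            mid = (low + high) // 2
            if array[mid] == target:
                return mid, iterations
            if array[mid] > target:
                high = mid - 1   # middle element bounds the range from above
            else:
                low = mid + 1    # middle element bounds the range from below
        return -1, iterations

    # Worst case on a 255-element array takes 8 comparisons (2**8 = 256).
    print(binary_search(list(range(255)), 254))  # (254, 8)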
Table 4 shows the results for the typical, best, and worst scenarios of the binary search algorithm. The lowest energy consumption and execution time values occurred in the best case, whereas the highest values occurred in the worst-case scenario; the typical scenario results lie between the two. Moreover, the estimated results are quite close to the measurements performed on the hardware platform. For example, the estimated execution time for the worst case was 15.3 µs and the measured one was 15.2 µs.
Table 4. Binary search results summary.

                                 Estimated               Hardware
    Case Study                   Time (µs)  Energy (µJ)  Time (µs)  Energy (µJ)
    1. Binary Search (BCET)      2.1        0.12         2.3        0.13
    2. Binary Search (WCET)      15.3       0.87         15.2       0.87
    3. Binary Search (TCET)      11.7       0.69         12.1       0.69
After the results had been compared (validated) against the hardware platform for an array with 255 elements, other experiments were performed taking into account arrays of different lengths for estimating the worst and best cases. As already explained, the conditional instruction on line 30 is the one responsible for determining the list length, so its probability was p = 4/5 (for 15 elements), p = 5/6 (for 31 elements), p = 6/7 (for 63 elements), and so on up to p = 10/11 (for 1023 elements). Figure 31 shows the worst-case execution time results and Figure 32 depicts the worst-case energy consumption results. As the reader may observe, both the worst-case energy consumption and the worst-case execution time increase when the binary search algorithm is performed on larger arrays.
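These fractions follow from the p = 1 - 1/N rule introduced earlier, assuming the length-determining loop iterates N = log2(n + 1) + 1 times in the worst case for an array of n = 2**k - 1 elements, so that p = k/(k + 1). The sketch below makes that derivation explicit; the iteration-count formula is inferred from the listed probabilities, not stated in the chapter.

    from math import log2

    def loop_probability(length):
        # Assumed worst-case iteration count for an array of 2**k - 1
        # elements: N = log2(length + 1) + 1, so p = 1 - 1/N = k/(k + 1).
        n_iter = int(log2(length + 1)) + 1
        return 1.0 - 1.0 / n_iter

    for length in (15, 31, 63, 255, 1023):
        print(length, round(loop_probability(length), 3))
    # 15 -> 0.8 (4/5), 31 -> 0.833 (5/6), ..., 1023 -> 0.909 (10/11)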
Figure 31. Binary search results of execution time.
Figure 32. Binary search results of energy consumption.
7.4. BCNT Algorithm
The BCNT algorithm was proposed by Motorola as an integral part of the PowerStone benchmark suite. BCNT performs a series of operations between two arrays, exercises memory manipulation, and also adopts bitwise operations. Table 5 depicts a comparative study between the estimated values and the measurements conducted on the hardware platform according to the methodology described in [TM08]. The execution time measured on hardware was 96.39 µs and the energy consumption was 5.73 µJ. The estimated time error was 2.27% and the energy error was 4.23%.
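BCNT is commonly described as a bit-counting kernel. As a hedged illustration of the table-driven bitwise style such code uses, the sketch below counts set bits in a byte array with a nibble lookup table; it is a reconstruction for exposition, not the PowerStone source.

    def bcnt_style_count(data):
        # Count set bits across a byte array with a 16-entry nibble lookup
        # table, mirroring the table-driven bitwise style of the benchmark.
        nibble_bits = [bin(i).count("1") for i in range(16)]
        total = 0
        for byte in data:
            total += nibble_bits[byte & 0x0F] + nibble_bits[byte >> 4]
        return total

    print(bcnt_style_count([0xFF, 0x0F, 0x00, 0xAA]))  # 8 + 4 + 0 + 4 = 16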
7.5. Pulse-Oximeter
This case study applies the proposed methodology for estimating the energy consumption and execution time of a real embedded system application. The pulse-oximeter is a widely used biomedical device that can be battery operated, so its battery lifetime is of great importance. This electronic device is responsible for non-invasive measurement of blood oxygen saturation and has been widely used in Critical Care Units (CCUs). The micro-controller controls the synchronization and amplitude of the LED driver, which dispatches non-simultaneous stream pulses to the infrared and red LEDs (see Figure 33). The LEDs generate, respectively, infrared and red radiation pulses through the finger of a patient, and a photo-diode detects the radiation level. The micro-controller calculates the related oxygen saturation level based on the received data and shows the result on a display.
Table 5. BCNT results summary.

                  Estimated   Measured   Error
    Time (µs)     94.25       96.39      2.27%
    Energy (µJ)   5.50        5.73       4.23%
Figure 33. Pulse-oximeter.
The pulse-oximeter code was divided into three processes: (i) excitation, which is responsible for dispatching stream pulses to the LEDs in order to generate radiation pulses; (ii) acquisition, which deals with the data captured from the radiation on the patient's finger; and (iii) control, which is responsible for computing the oxygen saturation level. For each process, a CPN model was built in order to estimate the respective energy consumption and execution time. Table 6 presents a comparative study between the estimated values and the measurements conducted on the hardware platform according to the methodology described in [CM08].

Table 6. Pulse-oximeter result summary.

                     Estimated               Hardware                Error
    Case Study       Time (µs)  Energy (µJ)  Time (µs)  Energy (µJ)  Time (%)  Energy (%)
    1. Excitation    38.48      2.20         38.88      2.25         1.04      2.20
    2. Acquisition   86.61      5.16         91.18      5.55         5.28      7.66
    3. Control       12410.78   722.54       12745.99   779.46       2.70      7.88
Furthermore, Table 7 presents a runtime comparison of CPN Tools versus the CPN Simulator, and Figure 34 depicts this comparison graphically. The graphic contains three lines representing the simulations: (i) CPN Tools adopting the CPN model as input; (ii) CPN Tools with the optimized CPN model as input; and (iii) the CPN Simulator evaluating the optimized CPN model. The reader should observe that the first line increases much faster than the others; hence, the CPN Simulator provides results of good accuracy with a much shorter runtime than CPN Tools. Regarding accuracy, a statistical test comparing the two sets of data (obtained by measurement and through simulation) detected no significant difference: the p-value (0.063) is greater than commonly chosen α-levels, so there is no statistical evidence of a difference between the estimated and measured values.
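The chapter does not state which statistical test produced the p-value of 0.063; a paired t-test over per-run measured and simulated values is one plausible choice, sketched below with invented illustrative data.

    from scipy.stats import ttest_rel

    # Invented per-run values for illustration only (microseconds).
    measured  = [38.9, 38.6, 39.1, 38.5, 38.8]
    simulated = [38.7, 38.8, 38.9, 38.6, 38.6]

    stat, p_value = ttest_rel(measured, simulated)
    # A p-value above the chosen alpha (e.g., 0.05) gives no statistical
    # evidence of a difference between the two data sets.
    print(p_value > 0.05)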
Table 7. CPN Tools runtime versus CPN Simulator runtime.

                   CPN Tools                 CPN Simulator
                   CPN Model    CPN Opt      CPN Opt
    Excitation     203 s        150 s        22 s
    Acquisition    1540 s       729 s        71 s
    Control        4203 s       1325 s       151 s
Figure 34. Runtime Comparison.
8. Conclusion
This chapter presented an approach based on Coloured Petri nets for estimating both the execution time and the energy consumption of embedded software. A methodology that reproduces the code control flow by composing the proposed set of basic CPN models, which represent the behavior of the microcontroller's instruction set, was detailed; the desired metrics are estimated by simulating the resulting CPN model. Additionally, this chapter presented CPN reduction rules that transform a CPN model into an equivalent simplified model in which all characteristics important for estimating energy consumption and execution time are preserved. The runtime evaluation adopting this reduction process was, in some cases, 174 times faster than simulations adopting the non-optimized CPN model. Moreover, this chapter also detailed a simulation infrastructure of integrated tools that allows the automatic translation of compiled code into a CPN model, such that non-specialized users do not need to interact directly with the Petri net formalism. Hence, ALUPAS, a unified environment for estimating energy consumption and execution time, has been developed to provide such functionality; with it, system design complexity is considerably reduced and inconsistencies related to non-functional requirements are detected earlier without great difficulty. ALUPAS adopts stochastic discrete-event simulation, through which complex systems and different control flow scenarios can be easily evaluated. In order to represent these scenarios, the designer just changes the probabilistic annotation values associated with the
conditional instructions. The estimated results are stored and can be compared with other simulation results. Furthermore, being able to estimate the performance and energy consumption of a system is important because, if such requirements are not satisfied, the system designers can make changes at a very early stage of the design, thereby saving both time and money. Hence, ALUPAS can provide important insights to the designer about battery lifetime as well as about the parts of the application that need optimization. It is worth mentioning that the estimated values obtained via simulation on CPN Tools and through the CPN Simulator are quite close. However, the simulation on the CPN Simulator was, in some cases, 300 times faster than simulation on CPN Tools when considering the optimized CPN model. For the sake of fairness, the reader should also bear in mind that CPN Tools is a general-purpose environment and provides many more functionalities than the CPN Simulator does. The presented case studies clearly show that the proposed methodology and framework provide meaningful results with small errors, using a real-world critical care device, the pulse-oximeter, as well as other customized examples. The estimates obtained from the model are 93% close to the respective measures obtained from the real hardware platform. It is also important to highlight that pieces of code that are either energy- or time-consuming were also identified. Moreover, the simulations provide accurate results with much smaller computational effort than measurements on the hardware platform. It is important to stress that the proposed methodology is not restricted to ARM7-based microprocessors. As a future direction, the proposed methodology may be extended to other processor families; the proposed CPN Generator is a tool developed to help the designer extend the methodology to other processor families through the automatic creation of the basic CPN models. Moreover, the methodology can be extended to cover pipelining, a technique adopted by processors to allow the overlapped execution of multiple instructions at the same time. Similarly, another extension is to consider not only simple task operations, but also to estimate the energy consumption and execution time of multi-processor systems, for which Coloured Petri nets have a precise formal semantics that can easily represent parallel systems; the CPN Simulator tool was developed with this future work in mind and is able to evaluate parallel and concurrent systems. Another possible future work is to estimate the energy consumption and execution time of larger and more complex systems, such as data centers. Again, the formal semantics provided by the adoption of Petri nets can support this more complex kind of system.
References

[ALE02] Todd Austin, Eric Larson, and Dan Ernst. SimpleScalar: An infrastructure for computer system modeling. Computer, 35(2):59-67, 2002.
[BA97] Doug Burger and Todd M. Austin. The SimpleScalar tool set, version 2.0. SIGARCH Comput. Archit. News, 25(3):13-25, 1997.
[BDB+03] David Brooks, Pradip Bose, Vijayalakshmi Srinivasan, and Michael K. Gschwind. New methodology for early-stage, microarchitecture-level power-performance analysis of microprocessors. IBM Journal of Research and Development, 2003.
[BKY98] Frank Burns, Albert Koelmans, and Alexandre Yakovlev. Analysing superscalar processor architectures with coloured Petri nets. International Journal on Software Tools for Technology Transfer, 2:182-191, 1998.
[BKY00] Frank Burns, Albert Koelmans, and Alexandre Yakovlev. WCET analysis of superscalar processors using simulation with coloured Petri nets. Real-Time Syst., 18(2-3):275-288, 2000.
[BL04] R. Barreto and R. Lima. A novel approach for off-line multiprocessor scheduling in embedded hard real-time systems. Design Methods and Applications for Distributed Embedded Systems, 2004.
[BM08] Mahmoud Bennaser and Csaba Andras Moritz. Power and performance trade-offs with process variation resilient adaptive cache architectures. In SBCCI '08: Proceedings of the 21st Annual Symposium on Integrated Circuits and System Design, pages 123-128, New York, NY, USA, 2008. ACM.
[Bol06] G. Bolch. Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications. John Wiley & Sons, Inc., 2nd edition, 2006.
[BTM00] David Brooks, Vivek Tiwari, and Margaret Martonosi. Wattch: A framework for architectural-level power analysis and optimizations. SIGARCH Comput. Archit. News, 28(2):83-94, 2000.
[Chu04] C.A. Chung. Simulation Modeling Handbook: A Practical Approach. CRC Press, 2004.
[CM08] G. Callou and P. Maciel. ALUPAS software. http://www.cin.ufpe.br/~grac/alupas, 2008.
[CMA+08a] G. Callou, P. Maciel, E. Andrade, B. Nogueira, and E. Tavares. A coloured Petri net based approach for estimating execution time and energy consumption in embedded systems. In SBCCI '08: Proceedings of the 21st Annual Symposium on Integrated Circuits and System Design, pages 134-139, New York, NY, USA, 2008. ACM.
[CMA+08b] G. Callou, P. Maciel, E. Andrade, B. Nogueira, and E. Tavares. Estimation of energy consumption and execution time in early phases of design lifecycle: An application to biomedical systems. Electronics Letters, 44(23):1343-1344, November 2008.
[CMA+08c] G. Callou, P. Maciel, E. Andrade, B. Nogueira, E. Tavares, and M. Oliveira. A formal approach for estimating embedded system execution time and energy consumption. In PATMOS '08: Proceedings of the 18th International Workshop on Power and Timing Modeling, Optimization and Simulation, 2008.
[CMC+08] E. Carneiro, P. Maciel, G. Callou, T. Tavares, and B. Nogueira. Mapping SysML state machine diagram to time Petri net for analysis and verification of embedded real-time systems with energy constraints. In ENICS '08: Proceedings of the 2008 International Conference on Advances in Electronics and Micro-electronics, pages 1-6, Washington, DC, USA, 2008. IEEE Computer Society.
[cpn07] CPN Tools, version 2.2.0. http://wiki.daimi.au.dk/cpntools/cpntools.wiki, 2007.
[CTM08] G. Callou, E. Tavares, and P. Maciel. MoDCS Tools. http://www.modcs.org/?page_id=15, 2008.
[dHAW+07] J.A. de Holanda, J. Assumpcao, D.F. Wolf, E. Marques, and J.M.P. Cardoso. On adapting power estimation models for embedded soft-core processors. In Industrial Embedded Systems, 2007 (SIES '07), International Symposium on, pages 345-348, July 2007.
[ET93] B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman and Hall, 1993.
[Fur00] S. Furber. ARM System-On-Chip Architecture. Addison-Wesley, 2000.
[GKK+08] Lei Gao, Kingshuk Karuri, Stefan Kraemer, Rainer Leupers, Gerd Ascheid, and Heinrich Meyr. Multiprocessor performance estimation using hybrid simulation. In DAC '08: Proceedings of the 45th Annual Conference on Design Automation, pages 325-330, New York, NY, USA, 2008. ACM.
[Har08] Robert Harper. Programming in Standard ML. Carnegie Mellon University, 2008.
[Her02] Ulrich Herzog. Formal methods for performance evaluation. pages 1-37, 2002.
[HH08] Giancarlo C. Heck and Roberto A. Hexsel. The performance of pollution control victim cache for embedded systems. In SBCCI '08: Proceedings of the 21st Annual Symposium on Integrated Circuits and System Design, pages 46-51, New York, NY, USA, 2008. ACM.
[HZDS95] Charlie X. Huang, Bill Zhang, An-Chang Deng, and Burkhard Swirski. The design and implementation of PowerMill. In ISLPED '95: Proceedings of the 1995 International Symposium on Low Power Design, pages 105-110, New York, NY, USA, 1995. ACM.
[IKV01] M. Irwin, M. Kandemir, and N. Vijaykrishnan. SimplePower: A cycle-accurate energy simulator. IEEE CS Technical Committee on Computer Architecture Newsletter, 2001.
[Jai91] R. Jain. The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. Wiley Interscience, 1991.
[Jen94] Kurt Jensen. An introduction to the theoretical aspects of coloured Petri nets. In A Decade of Concurrency, Reflections and Perspectives, REX School/Symposium, pages 230-272, London, UK, 1994. Springer-Verlag.
[Jen95] Kurt Jensen. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use, vol. 2. Springer-Verlag, London, UK, 1995.
[JKW07] Kurt Jensen, Lars Michael Kristensen, and Lisa Wells. Coloured Petri nets and CPN Tools for modelling and validation of concurrent systems. Int. J. Softw. Tools Technol. Transf., 9(3):213-254, 2007.
[JNM+06] Meuse N. O. Junior, Silvino Neto, Paulo Maciel, Ricardo Lima, Angelo Ribeiro, Raimundo Barreto, Eduardo Tavares, and Frederico Braga. Analyzing software performance and energy consumption of embedded systems by probabilistic modeling: An approach based on coloured Petri nets. Petri Nets and Other Models of Concurrency, ICATPN 2006, 4024/2006:261-281, 2006.
[Joh06] L. John. Performance Evaluation and Benchmarking. CRC Press, 2006.
[KCJ98] Lars M. Kristensen, Soren Christensen, and Kurt Jensen. The practitioner's guide to coloured Petri nets. International Journal on Software Tools for Technology Transfer, 2:98-132, 1998.
[KCNL08] V. Konstantakos, A. Chatzigeorgiou, S. Nikolaidis, and T. Laopoulos. Energy consumption estimation in embedded systems. IEEE Transactions on Instrumentation and Measurement, 57(3):797-804, 2008.
[kei08] Keil software, version 3.33. https://www.keil.com, 2008.
[Lee02] E. A. Lee. Embedded software, volume 56. 2002.
[LFTM97] Mike Tien-Chien Lee, Masahiro Fujita, Vivek Tiwari, and Sharad Malik. Power analysis and minimization techniques for embedded DSP software. IEEE Trans. Very Large Scale Integr. Syst., 5(1):123-135, 1997.
[LK99] Averill M. Law and David M. Kelton. Simulation Modeling and Analysis. McGraw-Hill Higher Education, 1999.
[LSJM01] J. Laurent, E. Senn, N. Julien, and E. Martin. High level energy estimation for DSP systems. Proc. Int. Workshop on Power and Timing Modeling, Optimization and Simulation (PATMOS), pages 311-316, 2001.
[Luc71] Henry Lucas, Jr. Performance evaluation and monitoring. ACM Comput. Surv., 3(3):79-91, 1971.
[man03] LPC2106/2105/2104 user manual, Philips Electronics. http://www.nxp.com/acrobat_download/usermanuals/UM_LPC2106_2105_2104_1.pdf, 2003.
[Mar03] Peter Marwedel. Embedded System Design. 2003.
[MAR07] OMG MARTE. Profile for Modeling and Analysis of Real-Time and Embedded Systems (MARTE), Beta 1. 2007.
[men08] Mentor. http://www.mentor.com/, 2008.
[MLC96] P.R.M. Maciel, R.D. Lins, and P.R.F. Cunha. Introdução às Redes de Petri e Aplicações. X Escola de Computação, Campinas, SP, 1996.
[MRRJ07] A. Muttreja, A. Raghunathan, S. Ravi, and N. Jha. Automated energy/performance macromodeling of embedded software. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 26:2229-2256, 2007.
[Mur89] T. Murata. Petri nets: Properties, analysis and applications. Proceedings of the IEEE, 77(4):541-580, 1989.
[NKN02] S. Nikolaidis, N. Kavvadias, and P. Neofotistos. Instruction level power models for embedded processors. Technical report, IST-2000-30093/EASY Project, Deliv. 21, Dec. 2002.
[NN02] S. Nikolaidis and P. Neofotistos. Instruction-level power measurement methodology. Electronics Lab, Physics Dept., Aristotle University of Thessaloniki, Greece, March 2002.
[RJ98] J.T. Russell and M.F. Jacome. Software power estimation and optimization for high performance, 32-bit embedded processors. In ICCD '98: Proceedings of the International Conference on Computer Design, page 328, Washington, DC, USA, 1998. IEEE Computer Society.
[RJ03] Jeffry T. Russell and Margarida F. Jacome. Architecture-level performance evaluation of component-based embedded systems. In DAC '03: Proceedings of the 40th Conference on Design Automation, pages 396-401, New York, NY, USA, 2003. ACM.
[SBVR08] Jürgen Schnerr, Oliver Bringmann, Alexander Viehl, and Wolfgang Rosenstiel. High-performance timing simulation of embedded software. In DAC '08: Proceedings of the 45th Annual Conference on Design Automation, pages 290-295, New York, NY, USA, 2008. ACM.
[Sim08] SimpleScalar LLC. http://www.simplescalar.com/, 2008.
[SLJM04] E. Senn, J. Laurent, N. Julien, and E. Martin. SoftExplorer: Estimation, characterization and optimization of the power and energy consumption at the algorithmic level. Proc. of PATMOS Conference, pages 342-351, 2004.
[Sys07] OMG SysML. Systems Modeling Language (SysML) Specification: Final Report. Object Management Group, 2007.
[Tav06] Eduardo Tavares. A time Petri net based approach for software synthesis in hard real-time embedded systems with multiple processors. Master's thesis, Centro de Informática, Universidade Federal de Pernambuco, 2006.
[TM08] E. Tavares and P. Maciel. Amalghma tool. http://www.cin.ufpe.br/~eagt/tools/, 2008.
[TMS+07] E. Tavares, P. Maciel, B. Silva, M. Oliveira, and R. Rodrigues. Modelling and scheduling hard real-time biomedical systems with timing and energy constraints. Electronics Letters, 43(19):1015-1017, 2007.
[TMSO08] E. Tavares, P. Maciel, B. Silva, and M.N. Oliveira. Hard real-time tasks' scheduling considering voltage scaling, precedence and exclusion relations. Information Processing Letters, 2008.
[TMW94] V. Tiwari, S. Malik, and A. Wolfe. Power analysis of embedded software: A first step towards software power minimization. Readings in Hardware/Software Co-Design, 1994.
[VLM+03] A.V. Ratzer, L. Wells, H.M. Lassen, et al. CPN Tools for editing, simulating, and analysing coloured Petri nets. Proceedings of Applications and Theory of Petri Nets, pages 23-27, 2003.
[Wel02] Lisa Wells. Performance Analysis using Coloured Petri Nets. PhD thesis, Department of Computer Science, University of Aarhus, 2002.
[Wil01] T. Wilmshurst. An Introduction to the Design of Small-scale Embedded Systems. 2001.
[Yea98] Gary K. Yeap. Practical Low Power Digital VLSI Design. Kluwer Academic Publishers, Norwell, MA, USA, 1998.
[YVKI00] W. Ye, N. Vijaykrishnan, M. Kandemir, and M. J. Irwin. The design and use of SimplePower: A cycle-accurate energy estimation tool. Proceedings of Design Automation Conference, 2000.
INDEX
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
A abstraction, xi, 267, 269, 273, 275, 278, 279, 287 accountability, 70 accounting, 94, 204 achievement, 231, 233, 238 acid, 75, 76, 80, 83, 92 actuators, 276 adaptation, 281 adjustment, 238, 250 aerosols, 137, 195, 199 Afghanistan, 204 Africa, 178, 206 age, 94, 190 agriculture, 73, 88, 184, 188, 233 air pollutants, 80 Algeria, 218 algorithm, xi, 226, 230, 232, 234, 241, 243, 244, 245, 247, 248, 249, 254, 301, 302, 305 alternative energy, 70, 82, 260 alternatives, viii, 69, 71, 82, 85, 86, 205, 223 ambiguity, 102 amplitude, 41, 43, 44, 56, 57, 116, 119, 120, 121, 123, 124, 127, 306 animal husbandry, 184 annotation, 300, 303, 308 annual rate, 187 architects, 279 Aristotle, 313 Asia, 97, 168, 178, 206, 256, 260 assets, 191 assimilation, 14, 46, 48, 102, 103, 105, 106 assumptions, viii, x, 69, 71, 86, 103, 134, 156, 183, 184, 190, 194, 235, 271, 272, 281 atoms, 82 attacks, 204 attitudes, 184 Australia, 148, 203, 206, 211, 212, 224 Austria, 83 authors, 192, 193, 264 automation, 311, 313
availability, viii, ix, 69, 71, 77, 85, 86, 88, 93, 94, 133, 134, 140, 141, 142, 147, 156, 168, 177, 178, 179, 180, 181 averaging, 212
B background, 12, 103, 104, 105, 106, 270 background information, 270 banking, 198 barriers, vii, x, 72, 203 basic needs, 95 batteries, 79, 81, 87 beams, 134, 135, 136, 137, 141, 142, 156, 177 behavior, vii, x, xi, 1, 37, 59, 225, 227, 235, 238, 260, 267, 269, 272, 276, 277, 278, 283, 301, 305, 308 benign, 75, 190, 216 bias, 30, 61, 106 binding, 76, 77, 217, 275 biodiesel, 76, 77, 81 biodiversity, 197 bioenergy, 87 biological responses, 3, 63 biomass, 71, 76, 82, 86, 87, 204 biomass materials, 76 blocks, 269, 280, 283, 284 blood, 95, 306 boilers, 77 Botswana, 91 branching, 292 Brazil, 81, 225, 226, 228, 239, 267 breeding, 233 broadband, 133 buffer, 191 building blocks, 279 burn, 78 burning, 70, 83, 85, 86, 196, 217, 221, 260 by-products, 76, 87, 93
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
316
Index
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
C Canada, 76, 206, 211, 212, 218, 224 carbon, viii, 69, 70, 75, 77, 81, 83, 85, 86, 87, 88, 95, 96, 195, 196, 200, 201, 205, 217, 218, 219, 220, 222, 223 carbon dioxide, viii, x, 25, 26, 27, 28, 30, 63, 69, 70, 71, 75, 77, 81, 82, 83, 85, 86, 87, 88, 97, 183, 184, 185, 188, 189, 190, 192, 193, 195, 196, 197, 198, 199, 201, 205, 217, 218, 219, 220, 221, 222 carrier, 80, 87 case study, 227, 302, 306 cast, 56 catastrophes, 73, 194, 195, 200 catchments, 74 cattle, 233 cell, 79, 80, 87 cellulose, 87 Central Asia, 97 Central Europe, 141, 146, 150 chaotic behavior, 44 chimera, 192 China, 26, 75, 79, 88, 106, 108, 113, 128, 129, 206, 210, 211, 212, 217 circulation, vii, 1, 11, 14, 41, 45, 64, 103, 106, 110, 192, 195, 199 City, 182 classes, 230, 233 classification, 181, 234, 270, 271 clean energy, 75, 96, 220, 222, 223 climate change, vii, viii, 70, 83, 87, 92, 97, 193, 195, 200, 205, 216, 220, 226 climatic factors, 235, 236 closure, 14 clusters, 113 CMC, 282, 311 coal, viii, x, 69, 72, 75, 76, 82, 85, 94, 193, 203, 204, 205, 206, 207, 208, 209, 210, 211, 213, 214, 215, 216, 220, 222 code generation, 283 codes, xii, 268, 280, 284, 294, 303, 309 coke, 88, 95 combined effect, 196 combustion, 80, 94, 96, 217, 218 commodity, 213, 224 communication, 88, 283 communication technologies, 88 community, viii, xi, 13, 69, 71, 80, 86, 103, 267 competition, 74, 191 competitiveness, 71, 97 compilation, 284, 300 compiler, 281, 284, 286, 287, 294, 295, 296 complement, 279 complexity, 3, 178, 279, 280, 308 components, 5, 18, 19, 30, 31, 32, 60, 61, 70, 172, 177, 193, 277, 278, 279, 284 composition, xi, 78, 125, 267, 283, 287 compounds, 78
computation, 244, 269 computing, 276 concentration, 2, 11, 32, 63, 64, 70, 79, 83, 86, 185, 188, 192, 193, 195, 197 condensation, viii, 99, 100 confidence, 272, 292, 295, 297, 298, 299, 300, 302, 303 confidence interval, 272, 295, 297, 298, 299, 300 configuration, 102, 231, 232, 234, 269, 271, 280, 299 conflict, 194, 197 consensus, x, 193, 197, 203 consent, 183 conservation, vii, viii, 70, 73, 87, 179, 264 construction, 73, 84, 85, 283 consumers, 216 consumption, vii, x, xi, 71, 72, 76, 80, 81, 88, 90, 91, 92, 97, 183, 184, 190, 191, 192, 197, 198, 203, 204, 205, 206, 207, 208, 209, 210, 211, 222, 223, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 237, 238, 239, 241, 242, 243, 244, 245, 247, 248, 249, 251, 252, 253, 254, 255, 268, 269, 270, 279, 280, 281, 283, 287, 296, 305, 310, 312 continuity, 4, 263, 264 contour, 128, 129 control, 78, 104, 105, 129, 141, 190, 193, 196, 201, 263, 268, 276, 277, 279, 283, 286, 287, 288, 290, 291, 300, 303, 304, 307, 308, 311 convergence, 3, 12, 97 conversion, 70, 75, 76, 78, 79 cooking, 76 cooling, vii, ix, 1, 2, 3, 63, 99, 100, 121, 123, 129, 194, 195, 198 copper, 79 coral reefs, 192 corporations, 226 correlation, 105, 106, 111, 119, 120, 121, 207, 209, 212, 214, 215, 226, 227, 230, 231, 235, 238 correlation analysis, 227, 235 correlation coefficient, 119 correlations, x, 104, 121, 214, 225, 227, 229, 230, 235, 238, 239 cost saving, 78 costs, 73, 74, 77, 78, 83, 90, 91, 92, 93, 95, 97, 197, 228, 260, 269, 271, 281, 283 coupling, 33 covering, 263 CPU, xi, 241, 242, 243, 245, 246, 247, 249, 251, 285, 286, 294 credit, 90, 97 critical value, 300 crops, 77, 81, 82, 94 crude oil, 204, 207, 209, 213 CST, 79 customers, 83, 84, 228 cycles, 242, 255, 280 cyclones, vii, viii, 1, 66, 99, 107, 129, 130
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Index
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
D data analysis, 239 data set, 150, 173, 226 database, 95, 105, 175 death, 187, 194, 197 debt, 88 decay, 77, 106, 107, 108, 110, 116, 129 decision makers, 78, 227 decision making, 226, 227, 229, 231 decisions, xi, 238, 239, 267, 271, 293 decomposition, 104, 280 defence, 197, 198 deficiency, 178 deficit, x, 77, 81, 183, 195 definition, xi, 9, 113, 134, 141, 259, 261, 262, 273, 274, 275, 279, 288 deflation, 198 deforestation, 71, 191, 194 degradation, 92, 192, 194 delivery, 269 Denmark, 75 density, 4, 5, 6, 13, 24, 33, 59, 79, 101, 102, 105, 123, 261 Department of Energy, 12 dependent variable, 168 deposits, 72, 199, 206, 207, 216 depression, 15 deprivation, 174 designers, 269, 283, 284, 287, 301, 309 destiny, 196 destruction, 25, 194 developed countries, 75, 83, 88, 217 developed nations, 85 developing countries, 75, 83, 217, 260 deviation, 5, 17, 23, 41, 303 differentiation, 93 diffusion, 4, 12, 104, 156, 157, 181 diffusivities, 24 diffusivity, 12 directives, 81 dispersion, 128, 129 distribution, ix, 17, 18, 32, 48, 50, 56, 82, 83, 84, 99, 106, 112, 113, 115, 116, 117, 123, 126, 128, 133, 148, 149, 152, 153, 156, 159, 160, 167, 179, 180, 181, 238, 251, 252, 281, 291 divergence, 3 division, 92 doors, 221 duration, ix, 99, 100, 111, 117, 121, 122, 124, 129, 133, 142, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 155, 156, 166, 174, 177, 178, 179, 180, 181, 242, 260, 261
E earth, 70, 71, 75, 86, 181, 217, 220
317
East China Sea, 26, 128 East Sea, 48 Easter, 194 ecology, 184 economic activity, 185, 191, 192 economic development, viii, 71, 86, 95, 198, 199, 201 economic growth, 88, 190, 192, 193 economic incentives, 73 economic problem, 190 economic resources, 72 economics, 72, 191 ecosystem, 191, 199, 220 Education, 89, 312 effluent, 73 effluents, 73 Egypt, 148 elasticity, 78 electricity, 75, 76, 77, 78, 79, 81, 82, 83, 84, 85, 87, 93, 97, 205, 211, 216, 220, 221, 227, 228, 230, 238, 259, 260 electromagnetic, 133 electron, 80 emission, 81, 87, 193, 196, 200, 205, 217, 218, 222 employees, 230 employment, 71, 191, 239 encouragement, viii, 69, 87 end-users, 81 energy, vii, viii, ix, x, xi, xii, 1, 2, 3, 5, 21, 60, 69, 70, 71, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 90, 92, 93, 94, 96, 97, 99, 100, 102, 108, 110, 117, 129, 133, 134, 147, 150, 161, 176, 177, 178, 179, 180, 192, 193, 194, 195, 203, 204, 205, 206, 210, 211, 212, 213, 216, 217, 218, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 232, 235, 236, 238, 239, 241, 242, 243, 244, 245, 248, 251, 252, 254, 255, 256, 259, 260, 261, 263, 264, 265, 267, 268, 269, 270, 271, 272, 276, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314 energy constraint, 268, 282, 283, 311 energy consumption, x, xi, xii, 74, 85, 88, 204, 210, 211, 212, 225, 226, 227, 230, 232, 235, 236, 238, 239, 241, 242, 243, 244, 248, 251, 252, 254, 255, 267, 268, 269, 270, 271, 272, 276, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 290, 291, 292, 293, 294, 295, 297, 298, 299, 300, 301, 302, 303, 305, 306, 307, 308, 309, 310, 312, 313 energy efficiency, vii, viii, 70, 87, 96, 205, 221, 222 energy recovery, 78 energy supply, x, 71, 72, 80, 203, 204, 205, 210, 216, 220, 222 entropy, 192 environment, vii, viii, x, xi, 69, 70, 71, 72, 73, 74, 78, 83, 86, 87, 88, 92, 94, 110, 113, 157, 183, 184, 187, 190, 191, 192, 193, 194, 197, 198, 226, 239,
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
318
Index
260, 269, 271, 275, 276, 277, 283, 284, 293, 294, 301, 303, 304, 308, 309 environmental awareness, 73 environmental change, 74, 193, 195 environmental conditions, 94, 123, 124 environmental degradation, 192, 194, 195, 199 environmental effects, 260 environmental impact, vii, x, 71, 72, 88, 93, 95, 183, 185, 192, 194, 195, 197, 198, 205, 221 environmental protection, 73, 192, 205, 216 Environmental Protection Agency, 77, 117, 119, 121, 122, 124 environmental regulations, 74 equity, 73, 85 error detection, 269 ester, 76 estimating, xii, 53, 59, 100, 102, 110, 227, 228, 267, 268, 269, 286, 293, 305, 306, 308, 310 ethanol, 76, 77, 81, 82 Europe, 75, 76, 77, 81, 150, 168, 174, 180, 190, 194, 200, 206, 220, 256, 260 European Commission, 76, 81 European Union, 76, 77, 81, 83, 88 evaporation, 5, 144, 150 evolution, 2, 8, 36, 37, 39, 81, 121, 131, 192 excitation, 306 exclusion, 119, 283, 314 execution, xi, xii, 119, 241, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 255, 267, 269, 271, 272, 279, 280, 281, 282, 284, 285, 286, 287, 288, 290, 291, 292, 293, 294, 296, 297, 298, 299, 300, 301, 302, 303, 305, 306, 307, 308, 309, 310 exercise, 74 experimental design, 14 exploitation, 73, 198, 199 exports, 76, 230 externalities, 93, 96 extinction, 135, 180 extraction, 74, 216, 218 extrapolation, 185, 187
F fabrication, 78 failure, 190, 193 fairness, 309 family, 285 famine, 187 farmers, 90 farms, 75, 90, 260 fat, 76 feedback, 64, 200 feet, 77, 211, 215, 216 fermentation, 76 fertility, 194 filters, 135 finance, 97 financial crisis, 216
financial resources, 73 fires, 87, 195, 196, 200, 299 firms, 75 fish, 90 fishing, 194 fission, 82, 83, 88 fitness, 231, 232, 234 flight, 78 fluctuations, 91, 175 fluid, xi, 78, 259, 260, 261 fluorescence, 78 focusing, 3, 63, 97 food, 75, 81, 95 food production, 75 forecasting, x, xi, 203, 213, 226, 228 foreign exchange, viii, 70, 71 fossil, vii, viii, x, 69, 70, 71, 76, 77, 80, 81, 82, 83, 85, 86, 87, 96, 97, 184, 196, 199, 203, 204, 205, 206, 207, 210, 211, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223, 260 France, 83, 211, 212 freedom, 300 frequency distribution, 182 freshwater, 74 friction, 4, 103 fruits, 230 fuel, x, 70, 71, 72, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 87, 89, 90, 91, 92, 93, 94, 96, 97, 199, 203, 204, 205, 206, 207, 210, 211, 213, 214, 215, 216, 218, 220, 221, 222, 223, 260 fuel efficiency, 221 functional programming, 275 funding, 85, 93, 95 funds, 85, 86 fusion, 193
G gases, viii, 69, 80, 88, 94, 95, 196, 199, 217 gasification, 89 gasoline, 76, 79, 205, 221 GDP, ix, x, 95, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 194, 195, 197, 199, 204, 211, 212 GDP per capita, 212 generation, vii, viii, 1, 69, 71, 75, 78, 79, 80, 82, 83, 84, 86, 88, 90, 93, 211, 228, 230, 232, 239, 260, 281, 283 geology, 184 Germany, 75, 76, 82, 211, 212 global climate change, 193, 194, 197 global competition, 191 global demand, 195 global economy, 198 global trade, 96 GNP, 230 goals, 76, 81, 82, 271 gold, 213
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Index government, 74, 76, 82, 85, 93, 205, 217, 220, 221, 222, 230 grains, 81, 230, 260 grants, 73 graph, 144, 275, 285 gravity, 4, 59 Great Depression, 190 Greece, 180, 181, 313 Greeks, 78 green buildings, 221, 223 greenhouse gases, 70, 86, 196, 197, 217 grid services, 94 grids, 33, 45, 56 groundwater, 74 grouping, 146 groups, 81, 153, 244, 270, 275, 282 growth, ix, x, 59, 60, 64, 72, 73, 74, 75, 76, 81, 83, 85, 92, 95, 97, 183, 184, 185, 186, 187, 188, 189, 191, 192, 194, 196, 197, 198, 204, 207, 210, 211, 212, 219, 260 growth rate, ix, x, 59, 183, 184, 185, 186, 187, 188, 189, 191, 192, 197 guidelines, 73, 228
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
H habitat, 194 Haiti, 191 harm, 72, 74 hazards, 78, 88, 195 health, 71, 74, 78, 86, 88, 93, 95, 179, 268 health care, 268 heat, viii, 3, 4, 5, 8, 13, 17, 33, 35, 57, 69, 71, 75, 78, 79, 80, 83, 84, 86, 87, 88, 94, 99, 100, 102, 108, 110, 116, 118, 129, 195, 217, 220, 221, 268 heat release, viii, 99, 100 heating, 75, 78, 87, 89, 94, 221 heavy oil, 206 height, 23, 26, 28, 33, 39, 40, 41, 42, 43, 44, 56, 61, 102, 114, 115, 116, 126, 128, 129, 134, 137 hemisphere, 149, 156, 160, 161, 163 hip, 198 histogram, 300 HM Treasury, 201 homeowners, 71, 75 Honda, 66, 67 host, 281 House, 130, 200 human animal, 200 human development, 85 human welfare, 191 humidity, 13, 35, 113, 114, 115, 126, 128, 129, 144, 227, 230, 235 hunting, 194 hurricanes, 112, 119, 197 hybrid, xi, 12, 103, 205, 221, 223, 226, 230, 231, 277, 281, 311 hydroelectric power, 85
319
hydrogen, 80, 81 hydrothermal system, 228 hypothesis, 173
I ideal, 137, 143, 184, 230 identification, 298 ideology, 86 illumination, 181, 230 images, 14, 15 implementation, 70, 73, 74, 75, 86, 93, 182, 210, 235, 275, 311 imports, 71, 76, 81 incentives, 70, 74, 75 incidence, 140, 142 inclusion, 62, 226, 264 income, 71, 86, 239 India, 75, 88, 206, 210, 212, 217 indication, 18 Indonesia, 149, 150, 196, 200 industrial revolution, 71, 79, 184, 198 industrialisation, 73, 74 industry, viii, 69, 75, 76, 78, 79, 80, 81, 82, 83, 85, 87, 221 inelastic, 206 inferences, xi, 226, 227, 234, 239 infinite, 97 inflation, 204, 213 infrastructure, 71, 74, 81, 82, 83, 90, 91, 94, 216, 308, 309 initiation, 37 insight, 260 instability, 3, 20, 22, 25, 64 institutions, 77 instruction, 269, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 294, 296, 299, 302, 305, 308 insulation, 79, 88 insurance, 195 integration, 2, 13, 15, 16, 19, 21, 23, 24, 29, 31, 33, 35, 37, 40, 41, 44, 46, 49, 56, 57, 70, 76, 106, 163, 262, 301 intelligence, x, 225, 227, 231 intelligent systems, 226 interaction, 59, 63, 64, 100, 101, 102, 115, 116, 121, 231, 276 interactions, 60, 100, 276 interdependence, 174 interface, 3, 6, 59, 100, 230, 231, 294, 300, 301 interference, 228 interrelations, ix, 133, 174, 178 interval, 33, 35, 61, 113, 184, 185, 187, 250, 291, 299 inventors, 79 inversion, 105, 145, 175 investment, 81, 82, 84, 90, 91, 93, 95, 96, 192, 205, 213, 216, 217, 220
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
320
Index
investors, 93, 216 Iran, 207 Iraq, 204 Ireland, 75, 83, 218, 219 iron, 85, 230 isotherms, 18, 20 Italy, 220 iteration, 7, 10
M
J Japan, 1, 13, 15, 35, 48, 60, 65, 66, 67, 99, 103, 106, 129, 131, 132, 211, 212, 221 jobs, 95, 244 Jordan, 97
K Kenya, 91, 259 Keynes, 185, 190, 198, 200 killing, 95 knots, 117 Korea, 129, 212
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
L labour, 92 lakes, 75, 196, 201 land, viii, 69, 71, 74, 75, 77, 86, 221, 227 landfills, 77 language, 275, 282, 294, 295, 296 laptop, 79 laws, 264 leakage, 279 LED, 177 legend, 79 legislation, 77 leisure, 190 liberalisation, 93, 96 life cycle, 268, 269 life span, 80, 116 lifestyle, 184 lifetime, xii, 90, 91, 102, 242, 267, 268, 306, 309 light scattering, 156 light transmittance, 141, 147 lignin, 87 likelihood, 93, 233 limitation, 280 line, 18, 20, 34, 36, 44, 84, 116, 151, 172, 212, 213, 295, 302, 305, 307 links, 150 living standards, 74, 190 lobbying, 81 love, 97 lower prices, 84 LPG, 89
maintenance, viii, 69, 70, 73, 87, 90, 91, 92, 94, 95, 141, 191, 277 management, 73, 74, 88, 93, 194, 238, 242, 256 mandates, 76 manipulation, 306 manufacturer, 294 manufacturing, 90, 220, 271 manure, 76 mapping, 13, 244, 256 market, x, xi, 72, 79, 80, 81, 82, 90, 92, 96, 203, 204, 210, 213, 221, 226, 227, 228, 230, 238, 239, 260, 265, 276 market penetration, 90 market share, 80 marketing, 227, 239 markets, 72, 80, 82, 84, 97, 213, 260 Markov chain, 230, 235, 310 Mars, 191 matrix, 93, 103, 104, 105, 235, 236, 237 measurement, 141, 168, 176, 180, 270, 271, 284, 287, 288, 307 measures, vii, viii, 70, 73, 76, 77, 78, 81, 87, 101, 192, 197, 230, 238, 271, 309 mechanical properties, 78 median, 185 Mediterranean, 143, 149, 150 memory, 276, 278, 279, 280, 281, 284, 285, 294, 306 mentor, 313 meridian, 142, 159, 161 metals, 78 Mexico, 76, 182 microelectronics, 268 micrometer, 78 microwaves, 101, 268 Middle East, 73, 75, 206, 210 military, 268 mining, 83 missions, 217 mixing, 8, 20, 53, 100, 116, 201 mobile device, 276 mobile phone, 268 mobility, 191 model, vii, x, xi, xii, 1, 2, 3, 4, 8, 11, 12, 13, 14, 33, 49, 59, 60, 61, 63, 66, 102, 103, 104, 105, 106, 153, 155, 157, 158, 181, 225, 226, 235, 242, 244, 245, 246, 267, 271, 272, 273, 275, 276, 278, 279, 280, 281, 283, 284, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309 model specification, 297 modeling, xi, 14, 156, 181, 238, 240, 267, 268, 271, 272, 273, 278, 280, 281, 282, 287, 309, 310, 312 models, ix, x, xi, xii, 2, 8, 12, 13, 14, 33, 108, 133, 153, 154, 160, 167, 180, 195, 203, 221, 226, 235, 267, 268, 269, 270, 271, 272, 273, 275, 278, 279,
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Index 280, 281, 282, 284, 286, 287, 288, 296, 297, 298, 301, 303, 304, 308, 309, 311, 313 modules, 228, 229, 275 moisture, vii, viii, 1, 13, 99, 100, 108 momentum, 3, 4, 13, 17, 43, 59, 70, 100, 103, 221, 261 money, 197, 269, 309 Moon, 67, 157, 181 morning, 139, 150, 151, 152, 153, 154, 155, 156, 168, 175 motion, 13, 16, 75, 196, 200, 260, 262 mountains, 139 movement, 78, 147, 260 MPI, 100, 108, 109, 110, 113 MRI, 11, 12, 16, 17, 18, 19, 23, 25, 29, 30, 31, 32, 53, 60, 65, 66, 103, 104, 105, 106, 130 multimedia, 241 multiplication, 261, 274 mutation, 232 mutation rate, 232
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
N nanometers, 78 nanoparticles, 78 nanotechnology, 78 nation, 71, 72, 74, 86, 95 National Research Council, 192, 200 Native Americans, 78 native species, 194 natural disasters, vii, 1, 88 natural gas, 72, 76, 78, 80, 204, 207, 208, 210, 211, 220 natural growth rate, 186 natural resources, 76, 85, 86, 88 negative relation, 207 Netherlands, 90 network, 83, 84, 93, 227, 233, 235, 239 New South Wales, 224 next generation, 222 NGOs, 89 Nile, 75 Nile River, 75 nitrogen, 80 nitrogen oxides, 80 noise, 260 North Africa, 73 North America, 76, 178, 194, 206 North Sea, 218 novel materials, 78 nuclear weapons, 220 nutrients, 30
O objectives, 70, 77, 81, 198
321
observations, 2, 14, 15, 28, 29, 45, 46, 47, 48, 53, 58, 101, 102, 106, 108, 195, 196 obstruction, 139, 161 oceans, viii, 14, 45, 99, 100, 102, 192, 201, 224 oil, x, 72, 76, 77, 80, 81, 82, 83, 85, 97, 193, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 222, 223, 230 oil sands, 206 oil spill, 83 operating system, 256 operator, 105 Operators, 232 optimization, xii, 227, 231, 235, 268, 270, 309, 313 orbit, 134, 217 order, vii, viii, x, xi, 12, 13, 23, 29, 41, 53, 64, 69, 70, 76, 78, 88, 183, 184, 185, 191, 192, 193, 194, 197, 198, 205, 216, 217, 220, 221, 222, 225, 226, 228, 230, 231, 232, 234, 235, 237, 238, 239, 241, 242, 243, 249, 269, 270, 271, 276, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 291, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 305, 306, 307, 308, 309 organ, 282 organic compounds, 76 organic matter, 77 orientation, 90, 161, 177 oscillation, 19, 23, 121, 124, 129 ownership, 74 ox, 76 oxygen, 76, 218, 306, 307 ozone, 137
P Pacific, ix, 31, 38, 57, 99, 100, 102, 105, 106, 112, 113, 114, 116, 117, 119, 121, 124, 129, 131, 195, 201 parallelism, 279 parameter, ix, 6, 10, 33, 100, 140, 280 parameters, ix, 11, 102, 105, 111, 117, 133, 141, 158, 159, 160, 168, 180, 184, 185, 186, 206, 228, 231, 232, 235, 251, 278, 284, 285, 295, 297, 298, 299, 300 parents, 237 particles, 156, 181, 182 partnership, 13, 82 path analysis, 244 peat, 85, 196, 200 penalties, 228 per capita income, 196 permeability, 72 permit, 283 Philippines, 106, 108, 113, 119, 129 photosynthesis, 87 physical properties, 78 physics, 2, 13, 63, 100, 261, 264 phytoplankton, 64 planning, 70, 85, 86, 220, 227
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
322
Index
plants, viii, 69, 73, 80, 81, 82, 83, 92, 260 plutonium, 82 police, 95 political leaders, 198 politics, 86 pollutants, 75, 81 pollution, ix, 71, 73, 74, 80, 86, 88, 92, 133, 137, 139, 144, 145, 197, 199, 221, 260, 311 polymers, 78 poor, 29, 90, 91, 97, 227 population, vii, ix, x, 73, 74, 82, 83, 88, 95, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 217, 226, 232, 234 population growth, x, 83, 88, 183, 190, 191, 192, 194, 197, 199, 200 ports, 290, 292 Portugal, 75 positive correlation, 120, 206, 208 positive feedback, 201 positive relation, 209 positive relationship, 209 poverty, 73, 74, 191, 197, 198 power, viii, x, xi, 69, 70, 71, 72, 75, 78, 79, 80, 82, 83, 84, 85, 86, 87, 88, 91, 93, 94, 96, 97, 111, 116, 117, 188, 190, 193, 194, 205, 216, 218, 220, 221, 222, 225, 226, 227, 228, 231, 232, 233, 234, 235, 237, 238, 239, 241, 242, 244, 245, 246, 247, 255, 256, 259, 260, 261, 262, 264, 265, 268, 269, 271, 276, 277, 278, 279, 280, 281, 309, 310, 311, 313, 314 power plants, 71, 79, 82, 83, 85, 221, 276 precipitation, 3, 5, 29, 33, 53 predictability, 268 prediction, 2, 13, 45, 50, 56, 57, 59, 61, 62, 64, 65, 66, 108, 174, 180, 190, 204, 213, 222 preference, 84 pressure, 3, 4, 5, 6, 13, 15, 17, 26, 27, 28, 35, 36, 37, 38, 39, 41, 49, 57, 59, 63, 88, 97, 101, 102, 106, 108, 109, 110, 111, 112, 113, 116, 121, 129, 263 price stability, 94 prices, x, 70, 72, 77, 81, 85, 91, 97, 203, 204, 205, 207, 208, 213, 214, 215, 217, 220, 222, 224 private investment, 82 private sector, 193 probability, 150, 151, 152, 153, 154, 178, 198, 232, 233, 234, 235, 236, 237, 283, 291, 292, 295, 298, 302, 305 probability distribution, 235 process gas, 89 producers, 72, 76, 96 production, vii, 1, 6, 24, 25, 31, 59, 64, 72, 73, 75, 76, 77, 78, 81, 82, 85, 87, 88, 90, 91, 93, 94, 95, 96, 185, 190, 192, 204, 206, 207, 216, 220, 268, 277 production costs, 94, 95 production technology, 72 productivity, 71, 221 profit, 150 program, 168, 269, 272, 280, 292, 294
programming, 269, 273, 275, 294 programming languages, 269, 273 proliferation, 82 prosperity, 95 prototype, 271, 283 public markets, 96 public service, 230 pumps, 90, 91, 92 purification, 76 purity, 76
Q QoS, 277 quality control, 105, 134 quality of life, 88 query, 232 queuing theory, 271
R race, 198 radar, 53 radiation, vii, ix, 3, 5, 8, 9, 10, 11, 12, 14, 17, 29, 32, 33, 53, 70, 100, 101, 133, 134, 176, 177, 180, 306, 307 radioactive waste, 220 radius, 2, 3, 4, 12, 17, 35, 36, 37, 39, 44, 64 rain, 75, 83, 92, 150, 174 rainfall, vii, 1, 2, 230, 235, 236 range, 8, 71, 73, 78, 82, 87, 90, 102, 138, 143, 145, 147, 148, 149, 150, 151, 152, 156, 168, 173, 176, 185, 188, 204, 220, 230, 233, 234, 238, 252, 285 raw materials, 97 REA, 302 reading, 280, 299 real terms, 195, 213, 214, 215 real time, xi, 241 reality, 80, 143, 156, 178 reason, xi, 77, 176, 259 reasoning, 239, 282 reciprocity, 149 recognition, 86 recovery, 88 recycling, 74, 78, 88 redevelopment, 72 redistribution, 153, 155, 156 reflection, ix, 133 refugees, 200 region, x, xi, 4, 24, 70, 101, 105, 112, 113, 117, 119, 121, 147, 149, 193, 195, 196, 206, 218, 225, 226, 227, 238, 239 regression, 11, 230, 234 regression method, 230 regulation, 239 relationship, 38, 44, 100, 101, 102, 106, 109, 110, 111, 116, 119, 121, 122, 129, 207, 212, 215, 279
Advances in Energy Research, Nova Science Publishers, Incorporated, 2010. ProQuest Ebook Central,
Index reliability, 70, 84, 268, 271 renewable energy, vii, viii, x, 69, 70, 71, 72, 76, 77, 80, 84, 85, 95, 203, 205, 221, 222, 223 repair, 70, 92, 270 replication, 295, 300 representativeness, 105 reproduction, 2 Requirements, 277, 282 reserves, x, 74, 192, 193, 199, 203, 204, 205, 206, 207, 208, 209, 210, 222, 223 resistance, 13 resolution, 12, 13, 14, 29, 33, 50, 61, 63, 102, 110, 113, 123, 286 resource management, 283 resources, viii, x, 69, 70, 71, 72, 73, 74, 80, 84, 85, 86, 87, 88, 190, 192, 194, 195, 196, 197, 198, 203, 204, 205, 206, 207, 210, 216, 222, 223, 244, 276, 278 responsiveness, xi, 267 retail, 220 returns, 3, 216 rings, 163 risk, 71, 192, 228, 239 river basins, 73 roughness, 59, 60, 61 routines, 92 rural areas, viii, 69, 71, 86 rural development, 81 Russia, 97, 183, 206, 207, 211, 217
Copyright © 2010. Nova Science Publishers, Incorporated. All rights reserved.
S safety, 71, 78, 88, 156, 205, 220, 222 sales, 228 salinity, ix, 4, 5, 9, 12, 29, 35, 51, 52, 53, 99, 100, 104, 105, 111, 129 salt, 90 sampling, 78 satellite, 2, 14, 32, 45, 46, 47, 50, 58, 101, 102, 105, 108, 129 saturation, 306, 307 Saudi Arabia, 206, 207, 211 savings, vii, viii, 70, 71, 86, 87, 88, 96, 221, 244 scaling, xi, 241, 242, 256, 314 scarcity, 73, 95, 191 scattering, ix, 133, 156, 157, 160, 177, 180 scheduling, xi, 241, 242, 243, 244, 246, 247, 248, 249, 250, 251, 255, 256, 257, 283, 310, 314 scientific method, 228 SCP, ix, 99 sea-level, 4, 26, 27, 101, 110, 111, 121, 137, 190, 195, 197, 200 search, 70, 73, 230, 232, 233, 301, 305, 306 security, viii, 70, 71, 81, 82, 86, 205 selecting, 87, 305 semantics, 304 sensitivity, 37, 46, 134, 176, 177 sensors, 101, 102, 276
323
separation, 70, 76 sewage, 73 shade, 147 shape, 92, 285 shares, 80 sharing, 198 shear, 6, 20, 22, 24, 25, 64, 115, 116 shock, 223 shortage, 73, 97 Siberia, 196 signals, 97 significance level, 119, 121, 122 signs, 119 silver, 205, 222, 223 simulation, 2, 14, 16, 18, 25, 29, 31, 45, 92, 182, 251, 252, 253, 269, 272, 273, 277, 278, 279, 280, 281, 283, 284, 286, 288, 292, 293, 295, 297, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 311, 313 Singapore, 178, 182 skin, 8, 101 Slovakia, 133, 140, 180, 181 smog, 75, 139, 144 smoke, 144 SMS, 273 social development, 95 software, xii, 162, 182, 267, 268, 269, 270, 277, 278, 280, 281, 282, 283, 286, 287, 294, 296, 308, 310, 312, 313, 314 soil, 186, 194, 197 soil erosion, 194, 197 solar collectors, 142, 179 solid waste, 77, 94 South Africa, 148 South China Sea, 106, 113 South Korea, 211 South Pacific, 256 soybean, 76 space, 60, 80, 133, 156, 165, 263, 279, 306 Spain, 75 species, 194, 196, 197 specific heat, 5, 35, 102 specific knowledge, 231 spectrum, 60, 73, 134, 138, 156, 176, 177, 178, 180 speed, vii, 1, 2, 3, 12, 15, 16, 17, 26, 27, 28, 30, 32, 34, 35, 36, 37, 44, 53, 59, 61, 63, 64, 90, 91, 101, 117, 128, 191, 243, 251, 252, 281 stability, 13, 93, 242 standard deviation, 40, 41, 124, 292, 293, 300, 303 standard of living, 70 standardization, 142 standards, 72, 73, 83, 157, 160, 167, 174, 187, 221 steel, 230 stimulus, 85 stochastic model, 280, 283 stock, 96 stock markets, 96 storage, viii, 69, 76, 91, 92, 94, 218, 219, 220 storms, 145
stoves, 221
strategies, 72, 86, 97, 193, 238, 243
stratification, ix, 12, 22, 23, 31, 32, 37, 99, 101
strength, 105
stress, 4, 6, 35, 53, 61, 100, 184, 191, 230, 270, 273, 275, 278, 288, 304, 309
structural changes, 33
structuring, 275
subgroups, 283
subsistence, 190
substitutes, 220
substitution, 263, 276
subtraction, 274
Sudan, 90, 91, 92, 97
sugar, 230
summer, 106, 124, 139, 152, 168, 169
Sun, 130, 139
supernatural, 197
suppliers, x, xi, 84, 225, 226, 238, 239
supply, x, 71, 72, 74, 76, 81, 84, 90, 91, 93, 95, 97, 100, 191, 203, 205, 206, 210, 211, 216, 220, 221, 222, 223, 226, 242, 245, 251, 276, 279
supply chain, 97
suppression, vii, 1, 33, 39, 116
surface area, 78
survival, 95
sustainability, 76, 88, 93, 184, 191, 205, 223
sustainable development, vii, viii, x, 70, 87, 88, 89, 97, 183, 184, 192, 199
Sweden, 81
switching, 279
symbols, 18, 19, 21, 23, 31, 47, 48, 112, 114, 149, 150
synchronization, 285, 286, 306
synthesis, 283, 314
synthetic fuels, 81

T

Taiwan, 106, 107, 108, 241
targets, 76, 77, 81, 193, 200, 205, 218, 222
tax incentive, 73
taxation, 76
technical change, 190
temperature, viii, ix, 1, 2, 3, 4, 5, 8, 9, 10, 12, 13, 18, 19, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 39, 40, 41, 46, 50, 51, 52, 53, 54, 55, 64, 70, 83, 86, 99, 100, 101, 102, 104, 105, 107, 111, 115, 123, 129, 134, 168, 177, 192, 195, 200, 217, 221, 227, 235
tenure, x, 183, 192
territory, 150
terrorism, 88
thermal energy, 63, 64, 107, 110
thermodynamics, 12, 14, 30, 63, 115, 124, 129
thinking, 196
threats, 193
threshold, 195, 279
thresholds, 195, 250
time constraints, 268, 269, 276, 283
time series, 29, 30, 40, 41, 44, 49, 57, 58, 62, 120, 124, 227, 228
timing, xi, xii, 267, 268, 272, 282, 283, 309, 313, 314
tones, 217
total energy, xi, 76, 88, 204, 211, 212, 231, 241, 244, 245, 247, 248, 249, 251, 252, 253, 254, 255, 262
total revenue, 230
Toyota, 221
tracking, 79, 138
tracks, 26, 48, 57, 61, 96, 123, 124, 125
trade, 76, 88, 95, 96, 269
trade-off, 82, 92
trading, 71, 86
traffic, 144
trajectory, 15, 92
transactions, 228
transformation, 104, 230, 233
transistor, 278
transition, 38, 44, 97, 191, 235, 236, 237, 272, 273, 275, 281, 287, 288, 292, 293, 298, 299
transitions, 235, 273, 275, 276, 280, 292, 293, 297, 298, 299
translation, 2, 30, 32, 63, 179, 181, 308
transmission, 8, 10, 12, 82, 83, 84, 135, 182
transparency, 175, 176
transport, vii, 1, 6, 24, 25, 30, 43, 53, 76, 77, 78, 81, 83, 88, 91, 100, 219, 263
transportation, 78, 80, 81
triggers, 195
tropical storms, 191
turbulence, 3, 25, 90
turbulent mixing, vii, 1, 2, 3, 14, 23, 24, 25, 30, 37, 53, 100
turnover, 233

U

Ukraine, 77
uncertainty, 3, 92, 216, 224, 226
UNESCO, 199
uniform, 12, 33, 101, 157, 177, 251, 252, 291, 293
United Kingdom, 66, 69, 82, 200, 211, 264, 312
United Nations, 82, 95
United Nations Development Programme, 95
United States, 72, 76, 77, 83, 217, 221
universe, 80, 133
updating, 243
uranium, 71, 82
urban life, 196
US Department of Commerce, 131
USSR, 179

V

validation, 272, 278, 282, 284, 312
variability, viii, ix, 1, 100, 116, 193, 199
variables, x, 4, 104, 203, 204, 206, 207, 209, 213, 222, 225, 226, 227, 229, 230, 231, 232, 233, 234, 235, 236, 238, 239, 272, 274, 291, 292, 299
variance, 103, 105, 119, 123, 222
vector, 4, 104, 105
vegetable oil, 76, 230
vegetation, 196
vehicles, 79, 80, 221
velocity, xi, 4, 5, 6, 24, 35, 61, 109, 201, 259, 260, 261, 262, 263, 264
Venezuela, 206
vessels, 45, 46, 101
Vietnam, 200
village, 70
viscosity, 12, 24, 104
vision, 177, 179
visualization, 229, 297
volatility, 204, 228
vulnerability, 187, 191

W

wage rate, 94
wages, 94
waking, 83
war, 88, 187
waste disposal, 73, 82, 205
waste management, 78, 86
waste treatment, 87
wastewater, 73, 74
water resources, 73, 74, 85, 95
water supplies, 95
water vapor, 137, 139, 144, 200, 217
wave number, 41, 43
wealth, ix, x, 95, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 194, 197, 198
weapons, 82
web, 53, 268
wellness, 179
wells, 73
Western Europe, 76
wind, viii, xi, 2, 3, 4, 6, 12, 17, 26, 27, 28, 33, 34, 35, 36, 37, 42, 44, 50, 53, 59, 60, 61, 64, 69, 72, 75, 85, 87, 90, 91, 92, 94, 96, 97, 100, 101, 102, 109, 113, 116, 117, 126, 128, 205, 220, 222, 259, 260, 261, 262, 264, 265
wind speeds, 59, 117
wind turbines, 75
windows, 179, 221
windstorms, 195
winter, 106, 139, 144, 152, 157, 166, 170, 172, 173, 174, 175, 176
wintertime, 146, 149, 166, 174
wood, 77, 230
workload, 244, 255, 256, 271, 277, 278
World Bank, 75
World War I, 82
writing, 190, 264

X

XML, 288, 296, 297, 298