Computational Science - ICCS 2007: 7th International Conference, Beijing China, May 27-30, 2007, Proceedings, Part I (Lecture Notes in Computer Science, 4487) 3540725830, 9783540725831

Part of a four-volume set, this book constitutes the refereed proceedings of the 7th International Conference on Computational Science, ICCS 2007, held in Beijing, China, in May 2007.

English Pages 1442 [1310] Year 2007


Table of contents:
Title page
Preface
Organization
Table of Contents – Part I
A Composite Finite Element-Finite Difference Model Applied to Turbulence Modelling
Introduction
Theory
Solution Method
Model Applications
Conclusions
References
Vortex Identification in the Wall Region of Turbulent Channel Flow
Introduction
Vortex-Identification Techniques
Complex Eigenvalues of the Velocity-Gradient Tensor (Method A)
Second Invariant of the Velocity-Gradient Tensor (Method B)
Imaginary Part of the Complex Eigenvalue Pair of the Velocity-Gradient Tensor (Method C)
Analysis of the Hessian of the Pressure (Method D)
Numerical Simulations
Results
Concluding Remarks
References
Numerical Solution of a Two-Class LWR Traffic Flow Model by High-Resolution Central-Upwind Scheme
Introduction
The Two-Class Model
Numerical Scheme
Numerical Examples
Conclusions
User-Controllable GPGPU-Based Target-Driven Smoke Simulation
Introduction
Mathematical Background
Implementation Details
Concluding Remarks
Variable Relaxation Solve for Nonlinear Thermal Conduction
Introduction
First Order Relaxation and Numerical Schemes
Second Order Relaxation and Numerical Schemes
Variable Relaxations
Numerical Issues
Conclusions
A Moving Boundary Wave Run-Up Model
Introduction
Theory
Applications of the Model
Conclusions
References
Enabling Very-Large Scale Earthquake Simulations on Parallel Machines
Introduction
Challenges for Porting and Optimization
Challenges for Initialization
Challenges for Executions
Challenges for Data Archive and Management
Challenges for Analysis of Results
Summary
References
Fast Insolation Computation in Large Territories
Introduction
The Sky
The Ground
The Horizon
The Calculation of Shaded Zones
Stewart Model and the Horizon Computation
Our Horizon Algorithm
Parallel Computing
The Accumulated Sky
Conclusions
Non-equilibrium Thermodynamics, Thermomechanics, Geodynamics
Introduction
Feedback Loops
Isothermal Thermomechanics
Non-isothermal Thermomechanics
Simplified Thermodynamic Approach for Geodynamics
Model
Results
Summary and Outlook
References
A Finite Element Model for Epidermal Wound Healing
Introduction
The Mathematical Model
Neo-vascularization
Wound Contraction
Numerical Method
Preliminary Results
Conclusions
Predicting Binding Sites of Hepatitis C Virus Complexes Using Residue Binding Propensity and Sequence Entropy
Introduction
Materials
Methods
Sequence Entropy
Binding Propensity
Encoding Scheme
Results and Discussion
Conclusion
Use of Parallel Simulated Annealing for Computational Modeling of Human Head Conductivity
Introduction
Methods
Forward and Inverse Problems
Skull Inhomogeneities
Computational Design
Computational Results
Conclusion
Mining Molecular Structure Data for the Patterns of Interactions Between Protein and RNA
Introduction
Identifying Hydrogen Bonds and van der Waals Contacts
Datasets of Protein-RNA Complexes
Hydrogen Bonds and van der Waals Contacts
Binding Propensity
Structure Elements of Protein and RNA
Algorithms for Extracting Secondary and Tertiary Structures of RNA
Algorithms for Analyzing Protein-RNA Complexes
Results and Discussion
Conclusion
References
Detecting Periodically Expression in Unevenly Spaced Microarray Time Series
Introduction
Mathematical Theory and Algorithm
Reconstruction Algorithm of Signal in Aliased Shift-Invariant Signal Spaces
Lomb-Scargle(L-S) Periodogram Method
Singular Spectrum Analysis (SSA) Method
Fisher’s Test
Experimental Results
Design of Experiment
Simulated Data
Genes Data
Conclusions
References
Creating Individual Based Models of the Plankton Ecosystem
Introduction
Creating Models
Functional Groups and States
The Water Column
Variables and Constants
Special-Purpose Functions
Further Specification
Species
Scenario
Agent Initialisation and Management
Logging Options
Compilation and Execution
Applications and Example Results
Conclusion
A Hybrid Agent-Based Model of Chemotaxis
Introduction
Integrating Agents and Quantities
Chemotaxis
Conclusions
References
Block-Based Approach to Solving Linear Systems
Introduction
Public Domain Solvers
Average Vector Length
Indirect Memory Addressing
Block-Based Linear Iterative Solver (BLIS)
Available Functionality
Future Work
Performance
Numerical Example
BLIS Scaling on Vector Machine
Performance Comparison on Scalar and Vector Machines
Summary
Numerical Tests with Gauss-Type Nested Implicit Runge-Kutta Formulas
Introduction
Adaptive Gauss-Type NIRK Methods
Test Problems
Numerical Experiments
An Efficient Implementation of the Thomas-Algorithm for Block Penta-diagonal Systems on Vector Computers
Introduction
Thomas Algorithm
Vector Architecture
Reordering the Computational Steps
Implementation Details and Numerical Results
Applying Gaussian Elimination to the Whole System
Applying Gaussian Elimination and Backward Substitution to One Block Row
Solution of a Sample BPD System
Summary
Compatibility of Scalapack with the Discrete Wavelet Transform
Introduction
Discrete Wavelet Transform
Wavelets and Linear Systems
Parallel DWT; Data Distributions
Example of Application: Parallel Implementation of the Wavelet-Schur Preconditioner
Numerical Results
Conclusions
A Model for Representing Topological Relations Between Simple Concave Regions
Introduction
Background
RCC8 and RCC23
Intersection Models
RCC62
16-Intersection Matrix
Basic Relations of RCC62
Relationship Between RCC23 and RCC62
CNG and CTRG of RCC62
Conclusions
Speech Emotion Recognition Based on a Fusion of All-Class and Pairwise-Class Feature Selection
Introduction
Emotion Corpus and Acoustic Features
Experiment Design
System Overview
Feature Subset Selection
Classifiers and Probability Estimates
Fusion Strategy
Performance Comparison
Conclusions and Future Work
Regularized Knowledge-Based Kernel Machine
Introduction
Prior Knowledge in Two-Class Classification
Regularized Knowledge-Based Kernel Classification Machine
Laminar/Turbulent Flow Pattern Data Set
Computational Results
Conclusion
References
Three-Phase Inverse Design Stefan Problem
Introduction
Three-Phase Problem
Genetic Algorithm
Numerical Example
Conclusion
Semi-supervised Clustering Using Incomplete Prior Knowledge
Introduction
Algorithms
Seeded-KMeans
Two Incomplete-Seeding KMeans Algorithms
Experiments
Conclusion
Distributed Reasoning with Fuzzy Description Logics
Introduction
E-Connection Between Fuzzy Description Logics
Fuzzy Links Between Two Knowledge Bases
Combined Distributed Fuzzy Description Logic Knowledge Bases
Semantical Discretization
Discrete Tableau Algorithm
Conclusion
Effective Pattern Similarity Match for Multidimensional Sequence Data Sets
Introduction
Representation of a Sequence
Pattern Match Using a Trend Vector
Similarity Between Segments
Similarity Between Sequences and Pattern Similarity Match Algorithm
Experimental Evaluation
Conclusions
References
GPU-Accelerated Montgomery Exponentiation
Introduction
Paper Organization
Background
General Purpose GPU Computations
The Montgomery Method
Proposed Algorithm
Overview
GPU Montgomery Product (GPU-MonPro)
GPU Montgomery Exponentiation (GPU-MonExp)
Algorithm Evaluation
Performance Test Overview
Test Results
Conclusions
Hierarchical-Matrix Preconditioners for Parabolic Optimal Control Problems
Introduction
The Optimal Control Problem
Discretization in Space
Discretization in Time
Hierarchical-Matrices
Index Cluster Tree and Block Cluster Tree
H-Matrix Arithmetic
Algebraic Approaches for Hierarchical-Matrix Construction
Algebraic Approaches to Construct an Index Cluster Tree
Block Cluster Tree Construction for Algebraic Approaches
Hierarchical-Matrix Preconditioners
Experimental Results
Searching and Updating Metric Space Databases Using the Parallel EGNAT
Introduction
The EGNAT Data Structure and Algorithms
Efficient Parallelization of the EGNAT
Conclusions
An Efficient Algorithm and Its Parallelization for Computing PageRank
Introduction
PageRank Review
Related Work
Matrix-Vector Multiplication Treatment
The Blocking Algorithm
Dangling Nodes Treatment
The Parallelization of the Algorithm
Experimental Evaluation
Experimental Setup
Results for Handling Dangling Nodes
Results for the Parallelization of the Algorithm
Conclusions and Future Work
A Query Index for Stream Data Using Interval Skip Lists Exploiting Locality
Introduction
Related Work
QUISIS
Interval Skip Lists
Behavior of QUISIS
Experiments
Conclusion
Accelerating XML Structural Matching Using Suffix Bitmaps
Introduction
Suffix Bitmap
Global Order of Tag Names
Suffix Bitmap
Construction of the Suffix Bitmap
Variable-Length Suffix Bitmap
Structural Matching Filtering
Traversal Filtering
Cursor Stream Filtering
Experiments
Related Work
Conclusions and Future Work
References
Improving XML Querying with Maximal Frequent Query Patterns
Introduction
Related Work
Background
Maximal Frequent Query Pattern Tree
Query Rewriting
Query Processing
Maximal Frequent Query Patterns Discovery
Querying Algorithm
Cache Replacement
Experimental Evaluation
Dataset
Cache Performance
Conclusion
References
A Logic-Based Approach to Mining Inductive Databases
Introduction
Inductive Query Languages
Relational Computation for Association Rules
Calculus+Fixpoint
Fixpoint Algorithm
A Logic Query Language
Performance and Optimisation Issues
Conclusion
An Efficient Quantum-Behaved Particle Swarm Optimization for Multiprocessor Scheduling
Introduction
Quantum-Behaved Particle Swarm Optimization
The Standard PSO
Quantum-Behaved PSO
Multiprocessor Scheduling Based on QPSO
Particle-Based Solution Representation
Permutation-Based Representation
QPSO for Multiprocessor Scheduling
Experimental Results
Summary
Toward Optimizing Particle-Simulation Systems
Introduction
Background of Particle Simulation Systems
Event Management
Lazy Determination Strategy (LDS)
Cell Structure
LDS for Collision Events
LDS for Boundary-Crossing Events
Optimal Cell Number/Size
Experimental Results
Conclusions
A Modified Quantum-Behaved Particle Swarm Optimization
Introduction
Quantum-Behaved Particle Swarm Optimization
The Proposed Method
Numerical Experiments
Conclusions
References
Neural Networks for Predicting the Behavior of Preconditioned Iterative Solvers
Introduction
Background
Methodology
Construction of the Sample Space
Neural Network Parameters
User Scenarios
Using Information from the Preconditioner
Results
General Observations
Using Information from Only the System
Using Information from the Preconditioner
Discussion
References
On the Normal Boundary Intersection Method for Generation of Efficient Front
Introduction
NBI Algorithm: Original and a New Formulation
Relationship Between mNBI Subproblem and Weighted Sum Method
Relationship Between mNBI Subproblem and Goal Programming
Conclusions
An Improved Laplacian Smoothing Approach for Surface Meshes
Introduction
Outline of the Smoothing Procedure
Classifying the Vertices
Repositioning the Adjustable Vertices
Computing Displacements by the ILSA
Adjusting Displacements by a Quadratic Penalty Approach
Projecting the New Position Back to the Original Mesh
Experimental Results
Conclusion and Future Work
References
Red-Black Half-Sweep Iterative Method Using Triangle Finite Element Approximation for 2D Poisson Equations
Introduction
Formulation of the Half-Sweep Finite Element Approximation
Implementation of the HSGS-RB
Numerical Experiments
Conclusion
Optimizing Surface Triangulation Via Near Isometry with Reference Meshes
Introduction
Mesh Optimization and Isometric Mappings
Mesh Optimization for Curved Surfaces
Experimental Results
Optimization of 2-D Meshes
Optimization of Dynamic Meshes
Conclusion
Efficient Adaptive Strategy for Solving Inverse Problems
Introduction
The Automatic hp Adaptive Finite Element Method
The Relation Between the Objective Function Error and the Finite Element Method Error
Algorithm
Step-and-Flash Imprint Lithography
Numerical Results
The Molecular Static Model
Conclusions and Future Work
Topology Preserving Tetrahedral Decomposition of Trilinear Cell
Introduction
Trilinear Isosurface Topology
Topology Preserving Tetrahedral Decomposition
Trilinear Isosurface Triangulation
Conclusion
FITTING: A Portal to Fit Potential Energy Functionals to ab initio Points
Introduction
A Generalization of the LEPS Potential
The Internet Portal Structure
The N + O_2 Case Study
Conclusions
Impact of QoS on Replica Placement in Tree Networks
Introduction
Framework and Access Policies
Complexity Results
Heuristics for the Replica Placement Problem
Experimental Plan
Conclusion
Generating Traffic Time Series Based on Generalized Cauchy Process
Introduction
Concept of GC Process
Computational Model
Case Study
Conclusions
References
Reliable and Scalable State Management Using Migration of State Information in Web Services
Introduction
Related Works
Proposed State Management Model
Implementation
Performance Evaluation
Conclusions and Future Works
References
Efficient and Reliable Execution of Legacy Codes Exposed as Services
Introduction
State of the Art
LB-FT Framework Architecture
Fault Tolerance and Load Balancing Scenarios
Performance Evaluation
Conclusion
Provenance Provisioning in Mobile Agent-Based Distributed Job Workflow Execution
Introduction
Partner Identification in the MCCF
Provenance Recording and Collection Protocol
Performance Evaluation
Conclusion and Future Work
EPLAS: An Epistemic Programming Language for All Scientists
Introduction
Requirements
EPLAS
An Interpreter Implementation of EPLAS
Concluding Remarks
Translation of Common Information Model to Web Ontology Language
Introduction
CIM to OWL Mapping
Related Work
Conclusions
XML Based Semantic Data Grid Service
Introduction
Mediator-Wrapper Based Semantic Data Grid Service
Communication Mechanism with Semantic Grid
Basic Communication Atom
Semantic Fusion Atom
Semantic XML Query Rewriting
Discussion and Conclusion
Communication-Aware Scheduling Algorithm Based on Heterogeneous Computing Systems
Introduction
Preliminaries
The Proposed Algorithm
Experiment Results
References
Macro Adjustment Based Task Scheduling in Hierarchical Grid Market
Introduction
The Hierarchical Grid Market
Macro Adjustment Based Grid Task Scheduling
Simulation Experiments
Conclusions
References
DGSS: A Dependability Guided Job Scheduling System for Grid Environment
Introduction
Related Work
Dependability Guided Job Scheduling System - DGSS
MC Based Node TTF Prediction Model
Job Completion Time Prediction Model
Dependable Scheduling Algorithm
Performance Evaluation
Conclusions
References
An Exact Algorithm for the Servers Allocation, Capacity and Flow Assignment Problem with Cost Criterion and Delay Constraint in Wide Area Networks
Introduction
Problem Formulation
The Branch and Bound Algorithm
Computational Results
Conclusion
References
Adaptive Divisible Load Model for Scheduling Data-Intensive Grid Applications
Introduction
Scheduling Model
Notations and Definitions
Cost Model
Adaptive Scheduling Model
Computation Time Fraction
Communication Time Fraction
The Final Form
Numerical Experiments
Conclusion
Providing Fault-Tolerance in Unreliable Grid Systems Through Adaptive Checkpointing and Replication
Introduction
Related Work
The System Model
Adaptive Checkpointing Strategies
Last Failure Dependent Checkpointing (LFDC)
Mean Failure Dependent Checkpointing (MFDC)
Adaptive Checkpoint and Replication-Based Scheduling (ACRS)
Simulation Results
Conclusions and Future Work
A Machine-Learning Based Load Prediction Approach for Distributed Service-Oriented Applications
Introduction
Load Prediction Method for Service-Oriented Applications
Model of the Load Balancing Middleware
Machine-Learning Based Load Prediction
Conclusions
References
A Balanced Resource Allocation and Overload Control Infrastructure for the Service Grid Environment
Introduction
Architecture of the Load Balancing Middleware
Balanced Resource Allocation and Overload Control
Load Metrics
On-Demand Creation and Destruction of the Replicas
Performance Results
Conclusions
References
Recognition and Optimization of Loop-Carried Stream Reusing of Scientific Computing Applications on the Stream Processor
Introduction
Loop-Carried Stream Reusing
Recognizing Loop-Carried Stream Reusing
Optimizing Loop-Carried Stream Reusing
Experiment
Conclusion and Future Work
References
A Scalable Parallel Software Volume Rendering Algorithm for Large-Scale Unstructured Data
Introduction
Related Work
The Parallel Software Scanned Cell Projection Algorithm
Data Distribution
Parallel k-d Tree
Creating an A-Buffer
Task Management with Asynchronous Communication
Image Composition
Experimental Results
Conclusions and Future Work
References
Geometry-Driven Nonlinear Equation with an Accelerating Coupled Scheme for Image Enhancement
Introduction
Geometry-Driven Shock-Diffusion Equation
Differentials of a Typical Ramp Edge and Edge Sharpening
Local Differential Structure of Image
The Geometry-Driven Shock-Diffusion Equation
Numerical Implementation and Experimental Results
A Shock Capturing Scheme
The Coupled Iteration
Experiments
Conclusions
References
A Graph Clustering Algorithm Based on Minimum and Normalized Cut
Introduction
The MAN-C Algorithm
Properties of MAN-C Algorithm
Experimental Results
Conclusions
References
A-PARM: Adaptive Division of Sub-cells in the PARM for Efficient Volume Ray Casting
Introduction
A-PARM (Adaptive-Precomputed scAlar and gRadient Map)
PARM
A-PARM
Experimental Results
Conclusion
References
Inaccuracies of Shape Averaging Method Using Dynamic Time Warping for Time Series Data
Introduction
Background
Distance Measurement
Dynamic Time Warping Distance
Dynamic Time Warping Averaging
Experiment Evaluation
Does Reordering Make Any Differences?
Correctness of DTW Averaging Between Two Time Series
Can Cluster Center Drift Out of the Cluster?
Failure in K-Means Clustering with DTW
Discussion, Conclusion, and Future Works
References
An Algebraic Substructuring Method for High-Frequency Response Analysis of Micro-systems
Introduction
Algebraic Substructuring
Frequency Response Analysis
ASFRA Algorithm
Numerical Experiment
Conclusion
Multilevel Task Partition Algorithm for Parallel Simulation of Power System Dynamics
Introduction
Multilevel Partition Algorithm
Establish the Layered Model
Weights for Graph Vertex and Edge
Evaluation Function for Partition
Refine Results
Test Results
References
An Extended Implementation of the Great Deluge Algorithm for Course Timetabling
Introduction
The University Course Timetabling Problem
The Extended Great Deluge Algorithm
Experiments and Results
Conclusions and Future Work
References
Cubage-Weight Balance Algorithm for the Scattered Goods Loading with Two Aims
Introduction
Model Description, Assumptions and Notation
Method and Algorithm
One Truck Loading
M(M>1) Truck Loading
Numerical Investigation and Discussion
One Truck Loading
M(M>1) Truck Loading
Conclusion and Application
References
Modeling VaR in Crude Oil Market: A Multi Scale Nonlinear Ensemble Approach Incorporating Wavelet Analysis and ANN
Introduction
Background and Theories
Wavelet Decomposed Nonlinear Ensemble Value at Risk (WDNEVaR)
Empirical Studies
Conclusions
References
On the Assessment of Petroleum Corporation’s Sustainability Based on Linguistic Fuzzy Method
Introduction
The Maximum Entropy Model with the Attitude of Decision
OWA Operator
The 2-Tuple Linguistic Term and Uncertainty Fuzzy Number
The Aggregation Approach
Numerical Example
Conclusions
References
A Multiagent Model for Supporting Tourism Policy-Making by Market Simulations
Introduction
Tourist Destination
Tourist Agent
A Preliminary Test
Conclusions and Future Work
An Improved Chaos-Based Image Encryption Scheme
Introduction
The Statistical Properties of Logistic Map
Key Stream Generator Construction and Its Performance Analysis
Key Stream Generator Construction
Balance Performance Analysis
Correlativity Performance Analysis
Security Performance Comparison
Experimental Results
Conclusion
References
A Factory Pattern in Fortran 95
Introduction
An Object-Based Electrostatic Particle Simulation
Extension to Electromagnetic Particles
A Factory Pattern
Discussion
Mapping Pipeline Skeletons onto Heterogeneous Platforms
Introduction
Framework
Complexity Results
Heuristics
Experiments
Conclusion
On the Optimal Object-Oriented Program Re-modularization
Introduction
The Optimal Re-modularization of a Program
Performance Estimation of a Re-modularized Program
Conclusions
References
A Buffered-Mode MPI Implementation for the Cell BE™ Processor
Introduction
Cell Architecture
MPI Design
MPI Initialization
Point-to-Point Communication
Performance Evaluation
Related Work
Limitations and Future Work
Conclusions
References
Implementation of the Parallel Superposition in Bulk-Synchronous Parallel ML
Introduction
Functional Bulk-Synchronous Parallel ML
Implementation
New Implementation of the Primitive of BSML
Functions for the Implementation of the Superposition
Example and Benchmarks
Calculus of the Prefix Using the Parallel Superposition
Benchmarks
Related works
Conclusion
Parallelization of Generic Libraries Based on Type Properties
Introduction
Related Work
Generic Programming
Parallelizing Libraries
Performance Evaluation
Conclusions
Traffic Routing Through Off-Line LSP Creation
Introduction
CEP for Load Balancing Purpose
Heuristic Approach and Results
References
Simulating Trust Overlay in P2P Networks
Introduction
Related Work
Trust Overlay Simulator Design
Threat Model
Simulation Experiments and Analysis
Conclusions
References
Detecting Shrew HTTP Flood Attacks for Flash Crowds
Introduction
Related Work
Detection Algorithms
File Popularity Matrix
HsMM
PCA and ICA
Detection Architecture
Experiments
Conclusions and Future Work
References
A New Fault-Tolerant Routing Algorithm for m-ary n-cube Multi-computers and Its Performance Analysis
Introduction
Routing Algorithm
Performance Analysis of Routing Algorithm
Conclusion
CARP: Context-Aware Resource Provisioning for Multimedia over 4G Wireless Networks
Introduction
Related Works: Mobility in 4G Wireless Systems
Symbolic Representation of 4G Wireless Systems
Context-Awareness in Heterogeneous Wireless Networks
Probabilistic Estimation of User-Mobility
Context-Aware Route Collection
Advanced Resource Provisioning
Simulation Results
Conclusions
Improved Fast Handovers for Mobile IPv6 over IEEE 802.16e Network
Introduction
Fast Handovers for MIPv6 over IEEE 802.16e
The Proposed Scheme
Performance Evaluation
System Modeling
Cost Analysis
Numerical Results and Analysis
Conclusions and Discussion
Advanced Bounded Shortest Multicast Algorithm for Delay Constrained Minimum Cost
Introduction
Bounded Shortest Multicast Algorithm
Difficulties in BSMA
Undirected Graph Model for BSMA
Meaning of 'Unmark' the Superedge
The Delay-Bounded Minimum-Cost Path ps and the Function to Get the ps Between T1j and T2j
Inefficient Use of k-Shortest Path Algorithm
Conclusion
Efficient Deadlock Detection in Parallel Computer Systems with Wormhole Routing
Introduction
The Proposed Mechanism
Performance
Conclusions
Type-Based Query Expansion for Sentence Retrieval
Introduction
Related Works
Sentence Retrieval Model Involved with Query Type
Information Association Based Classification
Relevance Ranking of Sentence
Experiments
Conclusion
References
An Extended R-Tree Indexing Method Using Selective Prefetching in Main Memory
Introduction
Related Works
Selective Prefetching
Cache Conscious Index Methods
SPR-Tree
Structure
Algorithms
Performance Evaluation
Conclusion
References
Single Data Copying for MPI Communication Optimization on Shared Memory System
Introduction
Motivation
Design and Implementation
Optimized Communication Protocol
Busy-Waiting Polling Strategy
Performance Evaluation
Platform Configuration
Performance Evaluation Results
Conclusion
Adaptive Sparse Grid Classification Using Grid Environments
Introduction
Grid-Based Development
Adaptivity and Results
Classification Examples
Dimension Adaptivity
Summary
Latency-Optimized Parallelization of the FMM Near-Field Computations
Introduction
Constraints for the Parallelization
Implementation
Initial Data Distribution
Design Steps
Results
Outlook
Efficient Generation of Parallel Quasirandom Faure Sequences Via Scrambling
Introduction
Scrambling
Parallelization and Implementations
Applications
Conclusions
Complexity of Monte Carlo Algorithms for a Class of Integral Equations
Introduction
Formulation of the Problem
A Class of Grid Monte Carlo Algorithms for Integral Equations
Cubature Method
Resolvent Monte Carlo Method for SLAE
Estimate of the Computational Complexity
A Grid Monte Carlo Algorithm
A Grid-Free Monte Carlo Algorithm
Concluding Discussion
Modeling of Carrier Transport in Nanowires
Introduction
Physical Model
The Monte Carlo Method
Grid Implementation and Numerical Results
Conclusion
Monte Carlo Numerical Treatment of Large Linear Algebra Problems
Introduction
Markov Chain Monte Carlo
Numerical Experiments
Conclusion
Simulation of Multiphysics Multiscale Systems: Introduction to the ICCS’2007 Workshop
Introduction
Overview of Work Presented in This Workshop
Conclusions
References
Simulating Weed Propagation Via Hierarchical, Patch-Based Cellular Automata
Introduction and Context
Background
Method
Results and Discussion
A Multiscale, Cell-Based Framework for Modeling Cancer Development
Introduction
Model
Parallelization
Result
Discussion and Outlook
Stochastic Modelling and Simulation of Coupled Autoregulated Oscillators in a Multicellular Environment: The her1/her7 Genes
Introduction
Methods
Results and Discussion
Conclusions
Multiscale Modeling of Biopolymer Translocation Through a Nanopore
Introduction
Numerical Set-Up
Translocation Dynamics
Effects of Parameter Values
Mapping to Real Biopolymers
Conclusions
Multi-physics and Multi-scale Modelling in Cardiovascular Physiology: Advanced User Methods for Simulation of Biological Systems with ANSYS/CFX
Introduction
ANSYS/CFX Special Features: Crossing the Flow Interface Towards Biological Modelling
Case Study: Coupling a Model of the Left Ventricle to an Idealized Heart Valve
A Multi-scale Model of the Left Ventricle
Computational Model of the Mitral Valve and Coupling with the LV Model
Towards Other Biological Applications
Conclusions and Perspectives
References
Lattice Boltzmann Simulation of Mixed Convection in a Driven Cavity Packed with Porous Medium
Introduction
Numerical Method: The Lattice Boltzmann Method
Numerical Results and Discussion
Effect of the Reynolds Number (Re)
Effect of the Richardson Number (Ri)
Effect of the Darcy Number (Da)
Conclusion
Numerical Study of Cross Diffusion Effects on Double Diffusive Convection with Lattice Boltzmann Method
Introduction
The Lattice Boltzmann Method for the Formulation of the Problem
Numerical Results and Discussion
Summary
Lattice Boltzmann Simulation of Some Nonlinear Complex Equations
Introduction
Lattice Boltzmann Model
LB Model for CDE
Version of LB Model for Complex CDE
Simulation Results
Conclusion
A General Long-Time Molecular Dynamics Scheme in Atomistic Systems: Hyperdynamics in Entropy Dominated Systems
Introduction
Theory
A New Constitutive Model for the Analysis of Semi-flexible Polymers with Internal Viscosity
Introduction
The Governing Equations
Results and Examples
The Material Coefficients in Steady Shear Flows
The Extensional Viscosity in Steady Extensional Flows
Conclusions
Coupled Navier-Stokes/DSMC Method for Transient and Steady-State Gas Flows
Introduction
The Coupling Method
The CFD Solver
The Molecular Algorithm: DSMC
Schwarz Coupling
Results
1-D Shock-Tube Problem
2-D Expanding Jet in a Low Pressure Chamber
Conclusions
Multi-scale Simulations of Gas Flows with Unified Flow Solver
Introduction
Examples of Multi-scale Simulations with UFS
High Speed Flows
Low Speed Flows
Future Directions
Instabilities
Multi-scale Simulation of Turbulent Flows
Multiscale Flow Structures in Open Systems
Conclusion
References
Coupling Atomistic and Continuum Models for Multi-scale Simulations of Gas Flows
Introduction
Unified Flow Solver
UFS Solution Procedure
Molecular Gases and Gas Mixtures
Multi-scale Models for Computational Material Science
Bridging Disparate Length and Time Scales
Multi-scale Computational Framework
Conclusion
References
Modelling Macroscopic Phenomena with Cellular Automata and Parallel Genetic Algorithms: An Application to Lava Flows
Introduction
Model Specification
Model Calibration
A Case of Study
Conclusions
Acceleration of Preconditioned Krylov Solvers for Bubbly Flow Problems
Introduction
DICCG Method
Application of DICCG to Bubbly Flow Problems
Numerical Experiments
Test Case 1: Stationary Problem
Test Case 2: Time-Dependent Problem
Conclusions
An Efficient Characteristic Method for the Magnetic Induction Equation with Various Resistivity Scales
Introduction
A Characteristic Finite Element Method for the Magnetic Field Induction Equation
Numerical Experiments on the Resistivity Scales
Concluding Remarks
Multiscale Discontinuous Galerkin Methods for Modeling Flow and Transport in Porous Media
Introduction
Discontinuous Galerkin Schemes
Governing Equations
DG Schemes
Multiscale Formulation for Discontinuous Galerkin
Numerical Results
Conclusions
Fourier Spectral Solver for the Incompressible Navier-Stokes Equations with Volume-Penalization
Introduction
Fourier Collocation Scheme
Results
Conclusion and Discussion
High Quality Surface Mesh Generation for Multi-physics Bio-medical Simulations
Introduction
Related Work
Method
Conclusions
Macro-micro Interlocked Simulation for Multiscale Phenomena
Introduction
Applications
Cloud Simulation
Detonation Simulation
Plasma Simulation
Summary
Towards a Complex Automata Framework for Multi-scale Modeling: Formalism and the Scale Separation Map
Introduction
Complex Automata Modeling
Definition
Execution Model
Genericity of CA's
Coupling Mechanisms
The Scale Separation Map
Examples
Grid Refinement
Time Splitting in Reaction-Diffusion
In-stent Restenosis
Discussion and Conclusions
Multilingual Interfaces for Parallel Coupling in Multiphysics and Multiscale Systems
Introduction
The Model Coupling Toolkit
Construction of the Multi-lingual Interfaces for MCT
The MCT Multilingual Programming Model
C++
Python
Performance
Conclusions
On a New Isothermal Quantum Euler Model: Derivation, Asymptotic Analysis and Simulation
Introduction
Derivation of the Model
The Underlying Quantum Microscopic Model
The Moment Method
The Quantum Free Energy Minimization Principle
The Quantum Euler Model
Special Case of Irrotational Flows
Formal Asymptotics
Semiclassical Asymptotics
The Zero Temperature Limit
Numerical Results
Conclusion and Perspectives
Grate Furnace Combustion: A Submodel for the Solid Fuel Layer
Introduction
Model Equations and Data
Results
Conclusions
Introduction to the ICCS 2007 Workshop on Dynamic Data Driven Applications Systems
Introduction
Overview of Work Presented in This Workshop
Summary
References
Pharmaceutical Informatics and the Pathway to Personalized Medicines
Towards Real-Time Distributed Signal Modeling for Brain-Machine Interfaces
Introduction
A Mixed-Model DDDBMI Architecture
Experimental Design
Computational Framework
Weight Training for Linear Models
Inverse Model Computation
RL Computation
Results
Conclusions
References
Using Cyber-Infrastructure for Dynamic Data Driven Laser Treatment of Cancer
Introduction
Software Architecture
Image Segmentation, Meshing, and MRTI-Registration
Calibration, Optimal Control, and Error Estimation
Data Transfer, Visualization, and Current Results
Conclusions
Grid-Enabled Software Environment for Enhanced Dynamic Data-Driven Visualization and Navigation During Image-Guided Neurosurgery
Introduction
Near-Real-Time Non-rigid Registration
Registration Algorithm
Implementation Objectives
Implementation Details
Initial Evaluation Results
Discussion
From Data Reverence to Data Relevance: Model-Mediated Wireless Sensing of the Physical Environment
Introduction
Dynamic Control of Network Activity
Example: Soil Moisture Processes
Looking Ahead
AMBROSia: An Autonomous Model-Based Reactive Observing System
Introduction
A Mathematical Formulation of Sample Selection
Scalar Field Estimation
Faults in Sensor Data
Concluding Remarks
Dynamically Identifying and Tracking Contaminants in Water Bodies
Introduction
The SSSI
Problem Solving Environment SCIRun
Accurate Predictions
Conclusions
Hessian-Based Model Reduction for Large-Scale Data Assimilation Problems
Introduction
Reduced-Order Dynamical Systems
Hessian-Based Model Reduction
Application: Model Reduction for 3D Contaminant Transport in an Urban Canyon
Localized Ensemble Kalman Dynamic Data Assimilation for Atmospheric Chemistry
Introduction
Ensemble Kalman Filter Data Assimilation
The Localization of EnKF (LEnKF)
Preventing Filter Divergence
Inflation Localization
Numerical Results
Joint State and Parameter Assimilation
Conclusions
Data Assimilation in Multiscale Chemical Transport Models
Introduction
The STEM Chemical Transport Model
Chemical Transport Model in STEM
Continuous Adjoint Model in STEM
Properties of STEM
4D-Var Data Assimilation in STEM
Results for Data Assimilation
Assessment of Three Optimization Methods
Conclusions
References
Building a Dynamic Data Driven Application System for Hurricane Forecasting
Introduction
Data Driven Hurricane Forecast Scenario
System Requirements
DynaCode Components
A Dynamic Data Driven Wildland Fire Model
Introduction
Fire Model
Coefficient Identification
Numerical Solution
Data Assimilation
Data Acquisition
Visualization
Ad Hoc Distributed Simulation of Surface Transportation Systems
Introduction
Ad Hoc Distributed Simulations
Transportation Simulation Models
Real-Time Dynamic Data Analysis
Data Dissemination
Future Research
References
Cyberinfrastructure for Contamination Source Characterization in Water Distribution Systems
Introduction
Related Work
Architecture
Simulation Model Enhancements
Job Submission Middleware
Real-Time Visualization
Performance Results
Conclusions and Future Work
References
Integrated Decision Algorithms for Auto-steered Electric Transmission System Asset Management
Introduction
Overall Design and Recent Progress
Layer 5: Simulation, Decision and Information Valuation
Benders Decomposition and Illustration
Conclusions
References
DDDAS for Autonomic Interconnected Systems: The National Energy Infrastructure
Introduction
Distributed Dynamic Sensing
Distributed Heterogeneous Simulation
Visualization
Neural Distributed Agents
Summary
Implementing Virtual Buffer for Electric Power Grids
Introduction
Virtual Buffer for Electric Power Grid
CIMEG and TELOS
Conclusions and Future Work
References
Enhanced Situational Awareness: Application of DDDAS Concepts to Emergency and Disaster Management
Introduction
Dynamic Data Driven Application Systems
Data
Design
Data, Algorithms and Analysis
Structure and Tie Strengths in Mobile Communication Networks
Anomaly Detection Using a Markov Modulated Poisson Process
Mapping and Visualization
Hybrid Clustering Algorithm for Outlier Detection on Streaming Data
Quantitative Social Group Dynamics on a Large Scale
Summary
References
AIMSS: An Architecture for Data Driven Simulations in the Social Sciences
Introduction: Intelligent Assistance for Model Development
Social Science Case Study
Data Sources
Interpretation and Consistency Checking
Data Mining: Recognising General Patterns
Consistency-Checking
Towards Dynamic Reconfiguration and Adaptation
Adaptation and Model Revision
Limitations of the Current Approach
Conclusion
Bio-terror Preparedness Exercise in a Mixed Reality Environment
Introduction
Computational Modeling
Virtual Geographies
Computational Epidemiology of Synthetic Population
Public Opinion Model
Economic Impact Model
Application: Indiana District 3 Full-Scale Exercise 2007
Detailed Public Health Statistics for District 3
Economic Impact for District 3
Public Opinion Impact for District 3
Conclusion
Dynamic Tracking of Facial Expressions Using Adaptive, Overlapping Subspaces
Introduction
Related Work
Learning Shape Manifold
Image Search in the Clustered Shape Space
Dynamic Data Driven Tracking Framework
Deformable Model Based 3D Face Tracking
Conclusion
Realization of Dynamically Adaptive Weather Analysis and Forecasting in LEAD: Four Years Down the Road
Introduction
System Model
Dynamic Data Mining
System Monitoring
Adaptation for Performability
Conclusion
Active Learning with Support Vector Machines for Tornado Prediction
Introduction
Data and Analysis
Methodology
Support Vector Machines
Active Learning with SVMs
Measuring the Quality of the Forecasts for Tornado Prediction
Experiments
Results
Conclusions
References
Adaptive Observation Strategies for Forecast Error Minimization
Introduction
Models of Non-linear Weather Prediction
State Estimation
A Targeting Algorithm for Multiple Mobile Sensor Platforms
Trajectory Learning
Experimental Results
Conclusion
Two Extensions of Data Assimilation by Field Alignment
Introduction
Multivariate Formulation
Application to Velocimetry
Discussion and Conclusions
A Realtime Observatory for Laboratory Simulation of Planetary Circulation
Introduction
The Observatory
Physical Simulation and Visual Observation
Numerical Model
State Estimation
Conclusions
Planet-in-a-Bottle: A Numerical Fluid-Laboratory System
Introduction
The Laboratory Abstraction: Planet-in-a-Bottle
Progress
Compressed Sensing and Time-Parallel Reduced-Order Modeling for Structural Health Monitoring Using a DDDAS
Introduction
An Efficient Data Compression Scheme for Sensor Networks
Compressed Sensing Overview
Implementation
Summary
Preliminary Verification
A Time-Parallel Computational Framework for Faster Numerical Predictions
Nonlinear Structural Dynamics Computational Models
Nonlinear PITA
Preliminary Performance Assessment
Multi-level Coupling of Dynamic Data-Driven Experimentation with Material Identification
Introduction
Multi-level Approach
Composite Material System Application
Performance Measures
Loading Path Optimization
Numerical Application
Conclusions
Evaluation of Fluid-Thermal Systems by Dynamic Data Driven Application Systems - Part II
Introduction
Dynamic Data Driven Application Systems
Description of Research
Objective
Configuration
Experiments
Simulations
Validation
Response Surface Models
DDDAS Methodology
Results
Conclusions
Dynamic Data-Driven Fault Diagnosis of Wind Turbine Systems
Introduction
Signal Pre-processing and Local-Level Detection
Dynamic Interrogation and Decision-Making
Summary
References
Building Verifiable Sensing Applications Through Temporal Logic Specification
Introduction
Overview of Proposed Approach
Evaluation
Specifying Resource-Quality Tradeoffs
Concluding Remarks
Dynamic Data-Driven Systems Approach for Simulation Based Optimizations
Introduction
Optimization Challenges in Large-Scale Domains
Distributed Multi-block Simulations
Data Management and Processing
Conclusions
DDDAS/ITR: A Data Mining and Exploration Middleware for Grid and Distributed Computing
Introduction
Anomaly Detection
Detecting Distributed Attacks
Grid Middleware
Prototype Implementation
Performance Evaluation
Performance Evaluation Functional Units
Conclusion
References
A Combined Hardware/Software Optimization Framework for Signal Representation and Recognition
Introduction
Mathematical Foundations
Walsh Wavelet Packets
Best-Basis Algorithm
Exploiting Transformation Structure in Hardware
Compressed Representation
Optimized Implementation for a Row
Computation Reuse
Adaptive Transformation Application
Preliminary Results
Conclusion
Validating Evolving Simulations in COERCE
Introduction
Representations of Uncertainty
Imprecise Probabilities
Probability Boxes
Dempster-Shafer Theory of Evidence
Lightweight Validation
Validating Emergent Behaviors
Summary
References
Equivalent Semantic Translation from Parallel DEVS Models to Time Automata
Introduction
Formal Semantic for Parallel DEVS Model
Parallel DEVS [2]
Timed Transition System (TTS)
Semantic of DEVS Model Based on TTS
Semantic of Coupled Parallel DEVS Model Based on TA
Timed Automata
From Atomic Finite DEVS to TA
From Coupled DEVS Model to Composition of TAs
Semantic of the Whole DEVS Model
Conclusion
References
Author Index

Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Moshe Y. Vardi, Rice University, Houston, TX, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

4487

Yong Shi Geert Dick van Albada Jack Dongarra Peter M.A. Sloot (Eds.)

Computational Science – ICCS 2007
7th International Conference
Beijing, China, May 27-30, 2007
Proceedings, Part I

Volume Editors

Yong Shi
Graduate University of the Chinese Academy of Sciences
Beijing 100080, China
E-mail: [email protected]

Geert Dick van Albada, Peter M.A. Sloot
University of Amsterdam, Section Computational Science
1098 SJ Amsterdam, The Netherlands
E-mail: {dick, sloot}@science.uva.nl

Jack Dongarra
University of Tennessee, Computer Science Department
Knoxville, TN 37996-3450, USA
E-mail: [email protected]

Library of Congress Control Number: 2007927049
CR Subject Classification (1998): F, D, G, H, I.1, I.3, I.6, J, K.3, C.2-3
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues
ISSN 0302-9743
ISBN-10 3-540-72583-0 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-72583-1 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media (springer.com)
© Springer-Verlag Berlin Heidelberg 2007
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12065691 06/3180 543210

Preface

The Seventh International Conference on Computational Science (ICCS 2007) was held in Beijing, China, May 27-30, 2007. This was the continuation of previous conferences in the series: ICCS 2006 in Reading, UK; ICCS 2005 in Atlanta, Georgia, USA; ICCS 2004 in Krakow, Poland; ICCS 2003 held simultaneously at two locations, in Melbourne, Australia and St. Petersburg, Russia; ICCS 2002 in Amsterdam, The Netherlands; and ICCS 2001 in San Francisco, California, USA. Since the first conference in San Francisco, the ICCS series has become a major platform to promote the development of Computational Science.

The theme of ICCS 2007 was “Advancing Science and Society through Computation.” It aimed to bring together researchers and scientists from mathematics and computer science as basic computing disciplines, researchers from various application areas who are pioneering the advanced application of computational methods to sciences such as physics, chemistry, life sciences, and engineering, arts and humanitarian fields, along with software developers and vendors, to discuss problems and solutions in the area, to identify new issues, and to shape future directions for research, as well as to help industrial users apply various advanced computational techniques.

During the opening of ICCS 2007, Siwei Cheng (Vice-Chairman of the Standing Committee of the National People’s Congress of the People’s Republic of China and the Dean of the School of Management of the Graduate University of the Chinese Academy of Sciences) presented the welcome speech on behalf of the Local Organizing Committee, after which Hector Ruiz (President and CEO, AMD) made remarks on behalf of international computing industries in China.

Seven keynote lectures were delivered by Vassil Alexandrov (Advanced Computing and Emerging Technologies, University of Reading, UK) - Efficient Scalable Algorithms for Large-Scale Computations; Hans Petter Langtangen (Simula Research Laboratory, Lysaker, Norway) - Computational Modelling of Huge Tsunamis from Asteroid Impacts; Jiawei Han (Department of Computer Science, University of Illinois at Urbana-Champaign, USA) - Research Frontiers in Advanced Data Mining Technologies and Applications; Ru-qian Lu (Institute of Mathematics, Chinese Academy of Sciences) - Knowledge Engineering and Knowledge Ware; Alessandro Vespignani (School of Informatics, Indiana University, USA) - Computational Epidemiology and Emergent Disease Forecast; David Keyes (Department of Applied Physics and Applied Mathematics, Columbia University) - Scalable Solver Infrastructure for Computational Science and Engineering; and Yves Robert (École Normale Supérieure de Lyon, France) - Think Before Coding: Static Strategies (and Dynamic Execution) for Clusters and Grids. We would like to express our thanks to all of the invited and keynote speakers for their inspiring talks.

In addition to the plenary sessions, the conference included 14 parallel oral sessions and 4 poster sessions. This year, we received more than 2,400 submissions for all tracks combined, out of which 716 were accepted. This includes 529 oral papers, 97 short papers, and 89 poster papers, spread over 35 workshops and a main track. For the main track we had 91 papers (80 oral papers and 11 short papers) in the proceedings, out of 360 submissions. We had some 930 people doing reviews for the conference, with 118 for the main track. Almost all papers received three reviews. The accepted papers are from more than 43 different countries and 48 different Internet top-level domains. The papers cover a large volume of topics in computational science and related areas, from multiscale physics to wireless networks, and from graph theory to tools for program development.

We would like to thank all workshop organizers and the Program Committee for the excellent work in maintaining the conference’s standing for high-quality papers. We would like to express our gratitude to the staff and graduates of the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy and the Institute of Policy and Management for their hard work in support of ICCS 2007. We would like to thank the Local Organizing Committee and Local Arrangements Committee for their persistent and enthusiastic work towards the success of ICCS 2007. We owe special thanks to our sponsors, AMD, Springer, the University of Nebraska at Omaha, USA, and the Graduate University of Chinese Academy of Sciences, for their generous support.

ICCS 2007 was organized by the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, with support from the Section Computational Science at the Universiteit van Amsterdam and the Innovative Computing Laboratory at the University of Tennessee, in cooperation with the Society for Industrial and Applied Mathematics (SIAM), the International Association for Mathematics and Computers in Simulation (IMACS), the Chinese Society for Management Modernization (CSMM), and the Chinese Society of Optimization, Overall Planning and Economical Mathematics (CSOOPEM).

May 2007

Yong Shi

Organization

ICCS 2007 was organized by the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, with support from the Section Computational Science at the Universiteit van Amsterdam and Innovative Computing Laboratory at the University of Tennessee, in cooperation with the Society for Industrial and Applied Mathematics (SIAM), the International Association for Mathematics and Computers in Simulation (IMACS), and the Chinese Society for Management Modernization (CSMM).

Conference Chairs
Conference Chair - Yong Shi (Chinese Academy of Sciences, China / University of Nebraska at Omaha, USA)
Program Chair - Dick van Albada (Universiteit van Amsterdam, The Netherlands)
ICCS Series Overall Scientific Co-chair - Jack Dongarra (University of Tennessee, USA)
ICCS Series Overall Scientific Chair - Peter M.A. Sloot (Universiteit van Amsterdam, The Netherlands)

Local Organizing Committee
Weimin Zheng (Tsinghua University, Beijing, China) – Chair
Hesham Ali (University of Nebraska at Omaha, USA)
Chongfu Huang (Beijing Normal University, Beijing, China)
Masato Koda (University of Tsukuba, Japan)
Heeseok Lee (Korea Advanced Institute of Science and Technology, Korea)
Zengliang Liu (Beijing University of Science and Technology, Beijing, China)
Jen Tang (Purdue University, USA)
Shouyang Wang (Academy of Mathematics and System Science, Chinese Academy of Sciences, Beijing, China)
Weixuan Xu (Institute of Policy and Management, Chinese Academy of Sciences, Beijing, China)
Yong Xue (Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing, China)
Ning Zhong (Maebashi Institute of Technology, USA)
Hai Zhuge (Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)

Local Arrangements Committee
Weixuan Xu, Chair
Yong Shi, Co-chair of events
Benfu Lu, Co-chair of publicity
Hongjin Yang, Secretary
Jianping Li, Member
Ying Liu, Member
Jing He, Member
Siliang Chen, Member
Guanxiong Jiang, Member
Nan Xiao, Member
Zujin Deng, Member

Sponsoring Institutions
AMD
Springer
World Scientific Publishing
University of Nebraska at Omaha, USA
Graduate University of Chinese Academy of Sciences
Institute of Policy and Management, Chinese Academy of Sciences
Universiteit van Amsterdam

Program Committee
J.H. Abawajy, Deakin University, Australia
D. Abramson, Monash University, Australia
V. Alexandrov, University of Reading, UK
I. Altintas, San Diego Supercomputer Center, UCSD
M. Antolovich, Charles Sturt University, Australia
E. Araujo, Universidade Federal de Campina Grande, Brazil
M.A. Baker, University of Reading, UK
B. Balis, Krakow University of Science and Technology, Poland
A. Benoit, LIP, ENS Lyon, France
I. Bethke, University of Amsterdam, The Netherlands
J.A.R. Blais, University of Calgary, Canada
I. Brandic, University of Vienna, Austria
J. Broeckhove, Universiteit Antwerpen, Belgium
M. Bubak, AGH University of Science and Technology, Poland
K. Bubendorfer, Victoria University of Wellington, Australia
B. Cantalupo, DATAMAT S.P.A, Italy
J. Chen, Swinburne University of Technology, Australia
O. Corcho, University of Manchester, UK
J.C. Cunha, Univ. Nova de Lisboa, Portugal

S. Date, Osaka University, Japan
F. Desprez, INRIA, France
T. Dhaene, University of Antwerp, Belgium
I.T. Dimov, ACET, The University of Reading, UK
J. Dongarra, University of Tennessee, USA
F. Donno, CERN, Switzerland
C. Douglas, University of Kentucky, USA
G. Fox, Indiana University, USA
W. Funika, Krakow University of Science and Technology, Poland
H.J. Gardner, Australian National University, Australia
G. Geethakumari, University of Hyderabad, India
Y. Gorbachev, St. Petersburg State Polytechnical University, Russia
A.M. Goscinski, Deakin University, Australia
M. Govindaraju, Binghamton University, USA
G.A. Gravvanis, Democritus University of Thrace, Greece
D.J. Groen, University of Amsterdam, The Netherlands
T. Gubala, ACC CYFRONET AGH, Krakow, Poland
M. Hardt, FZK, Germany
T. Heinis, ETH Zurich, Switzerland
L. Hluchy, Institute of Informatics, Slovak Academy of Sciences, Slovakia
A.G. Hoekstra, University of Amsterdam, The Netherlands
W. Hoffmann, University of Amsterdam, The Netherlands
C. Huang, Beijing Normal University, Beijing, China
M. Humphrey, University of Virginia, USA
A. Iglesias, University of Cantabria, Spain
H. Jin, Huazhong University of Science and Technology, China
D. Johnson, ACET Centre, University of Reading, UK
B.D. Kandhai, University of Amsterdam, The Netherlands
S. Kawata, Utsunomiya University, Japan
W.A. Kelly, Queensland University of Technology, Australia
J. Kitowski, Inst.Comp.Sci. AGH-UST, Cracow, Poland
M. Koda, University of Tsukuba, Japan
D. Kranzlmüller, GUP, Joh. Kepler University Linz, Austria
B. Kryza, Academic Computer Centre CYFRONET-AGH, Cracow, Poland
M. Kunze, Forschungszentrum Karlsruhe (FZK), Germany
D. Kurzyniec, Emory University, Atlanta, USA
A. Lagana, University of Perugia, Italy
J. Lee, KISTI Supercomputing Center, Korea
C. Lee, Aerospace Corp., USA
L. Lefevre, INRIA, France
A. Lewis, Griffith University, Australia
H.W. Lim, Royal Holloway, University of London, UK
A. Lin, NCMIR/UCSD, USA
P. Lu, University of Alberta, Canada
M. Malawski, Institute of Computer Science AGH, Poland

M. Mascagni, Florida State University, USA
V. Maxville, Curtin Business School, Australia
A.S. McGough, London e-Science Centre, UK
E.D. Moreno, UEA-BENq, Manaus, Brazil
J.T. Moscicki, Cern, Switzerland
S. Naqvi, CoreGRID Network of Excellence, France
P.O.A. Navaux, Universidade Federal do Rio Grande do Sul, Brazil
Z. Nemeth, Computer and Automation Research Institute, Hungarian Academy of Science, Hungary
J. Ni, University of Iowa, USA
G. Norman, Joint Institute for High Temperatures of RAS, Russia
B. Ó Nualláin, University of Amsterdam, The Netherlands
C.W. Oosterlee, Centrum voor Wiskunde en Informatica, CWI, The Netherlands
S. Orlando, Università Ca' Foscari, Venice, Italy
M. Paprzycki, IBS PAN and SWPS, Poland
M. Parashar, Rutgers University, USA
L.M. Patnaik, Indian Institute of Science, India
C.P. Pautasso, ETH Zürich, Switzerland
R. Perrott, Queen's University, Belfast, UK
V. Prasanna, University of Southern California, USA
T. Priol, IRISA, France
M.R. Radecki, Krakow University of Science and Technology, Poland
M. Ram, C-DAC Bangalore Centre, India
A. Rendell, Australian National University, Australia
P. Rhodes, University of Mississippi, USA
M. Riedel, Research Centre Juelich, Germany
D. Rodríguez García, University of Alcalá, Spain
K. Rycerz, Krakow University of Science and Technology, Poland
R. Santinelli, CERN, Switzerland
J. Schneider, Technische Universität Berlin, Germany
B. Schulze, LNCC, Brazil
J. Seo, The University of Manchester, UK
Y. Shi, Chinese Academy of Sciences, Beijing, China
D. Shires, U.S. Army Research Laboratory, USA
A.E. Solomonides, University of the West of England, Bristol, UK
V. Stankovski, University of Ljubljana, Slovenia
H. Stockinger, Swiss Institute of Bioinformatics, Switzerland
A. Streit, Forschungszentrum Jülich, Germany
H. Sun, Beihang University, China
R. Tadeusiewicz, AGH University of Science and Technology, Poland
J. Tang, Purdue University, USA
M. Taufer, University of Texas El Paso, USA
C. Tedeschi, LIP-ENS Lyon, France
A. Thandavan, ACET Center, University of Reading, UK
A. Tirado-Ramos, University of Amsterdam, The Netherlands

P. Tvrdik, Czech Technical University Prague, Czech Republic
G.D. van Albada, Universiteit van Amsterdam, The Netherlands
F. van Lingen, California Institute of Technology, USA
J. Vigo-Aguiar, University of Salamanca, Spain
D.W. Walker, Cardiff University, UK
C.L. Wang, University of Hong Kong, China
A.L. Wendelborn, University of Adelaide, Australia
Y. Xue, Chinese Academy of Sciences, China
L.T. Yang, St. Francis Xavier University, Canada
C.T. Yang, Tunghai University, Taichung, Taiwan
J. Yu, The University of Melbourne, Australia
Y. Zheng, Zhejiang University, China
W. Zheng, Tsinghua University, Beijing, China
L. Zhu, University of Florida, USA
A. Zomaya, The University of Sydney, Australia
E.V. Zudilova-Seinstra, University of Amsterdam, The Netherlands

Reviewers J.H. Abawajy D. Abramson A. Abran P. Adriaans W. Ahn R. Akbani K. Akkaya R. Albert M. Aldinucci V.N. Alexandrov B. Alidaee I. Altintas K. Altmanninger S. Aluru S. Ambroszkiewicz L. Anido K. Anjyo C. Anthes M. Antolovich S. Antoniotti G. Antoniu H. Arabnia E. Araujo E. Ardeleanu J. Aroba J. Astalos

B. Autin M. Babik G. Bai E. Baker M.A. Baker S. Balfe B. Balis W. Banzhaf D. Bastola S. Battiato M. Baumgarten M. Baumgartner P. Beckaert A. Belloum O. Belmonte A. Belyaev A. Benoit G. Bergantz J. Bernsdorf J. Berthold I. Bethke I. Bhana R. Bhowmik M. Bickelhaupt J. Bin Shyan J. Birkett

J.A.R. Blais A. Bode B. Boghosian S. Bolboaca C. Bothorel A. Bouteiller I. Brandic S. Branford S.J. Branford R. Braungarten R. Briggs J. Broeckhove W. Bronsvoort A. Bruce C. Brugha Y. Bu K. Bubendorfer I. Budinska G. Buemi B. Bui H.J. Bungartz A. Byrski M. Cai Y. Cai Y.Q. Cai Z.Y. Cai


B. Cantalupo K. Cao M. Cao F. Capkovic A. Cepulkauskas K. Cetnarowicz Y. Chai P. Chan G.-L. Chang S.C. Chang W.A. Chaovalitwongse P.K. Chattaraj C.-K. Chen E. Chen G.Q. Chen G.X. Chen J. Chen J. Chen J.J. Chen K. Chen Q.S. Chen W. Chen Y. Chen Y.Y. Chen Z. Chen G. Cheng X.Z. Cheng S. Chiu K.E. Cho Y.-Y. Cho B. Choi J.K. Choi D. Choinski D.P. Chong B. Chopard M. Chover I. Chung M. Ciglan B. Cogan G. Cong J. Corander J.C. Corchado O. Corcho J. Cornil H. Cota de Freitas

E. Coutinho J.J. Cuadrado-Gallego Y.F. Cui J.C. Cunha V. Curcin A. Curioni R. da Rosa Righi S. Dalai M. Daneva S. Date P. Dazzi S. de Marchi V. Debelov E. Deelman J. Della Dora Y. Demazeau Y. Demchenko H. Deng X.T. Deng Y. Deng M. Mat Deris F. Desprez M. Dewar T. Dhaene Z.R. Di G. di Biasi A. Diaz Guilera P. Didier I.T. Dimov L. Ding G.D. Dobrowolski T. Dokken J.J. Dolado W. Dong Y.-L. Dong J. Dongarra F. Donno C. Douglas G.J. Garcke R.P. Mundani R. Drezewski D. Du B. Duan J.F. Dufourd H. Dun

C. Earley P. Edmond T. Eitrich A. El Rhalibi T. Ernst V. Ervin D. Estrin L. Eyraud-Dubois J. Falcou H. Fang Y. Fang X. Fei Y. Fei R. Feng M. Fernandez K. Fisher C. Fittschen G. Fox F. Freitas T. Friesz K. Fuerlinger M. Fujimoto T. Fujinami W. Funika T. Furumura A. Galvez L.J. Gao X.S. Gao J.E. Garcia H.J. Gardner M. Garre G. Garsva F. Gava G. Geethakumari M. Geimer J. Geiser J.-P. Gelas A. Gerbessiotis M. Gerndt S. Gimelshein S.G. Girdzijauskas S. Girtelschmid Z. Gj C. Glasner A. Goderis


D. Godoy J. Golebiowski S. Gopalakrishnan Y. Gorbachev A.M. Goscinski M. Govindaraju E. Grabska G.A. Gravvanis C.H. Grelck D.J. Groen L. Gross P. Gruer A. Grzech J.F. Gu Y. Guang Xue T. Gubala V. Guevara-Masis C.H. Guo X. Guo Z.Q. Guo L. Guohui C. Gupta I. Gutman A. Haffegee K. Han M. Hardt A. Hasson J. He J. He K. He T. He J. He M.R. Head P. Heinzlreiter H. Chojnacki J. Heo S. Hirokawa G. Hliniak L. Hluchy T.B. Ho A. Hoekstra W. Hoffmann A. Hoheisel J. Hong Z. Hong

D. Horvath F. Hu L. Hu X. Hu X.H. Hu Z. Hu K. Hua H.W. Huang K.-Y. Huang L. Huang L. Huang M.S. Huang S. Huang T. Huang W. Huang Y. Huang Z. Huang Z. Huang B. Huber E. Hubo J. Hulliger M. Hultell M. Humphrey P. Hurtado J. Huysmans T. Ida A. Iglesias K. Iqbal D. Ireland N. Ishizawa I. Lukovits R. Jamieson J.K. Jan P. Janderka M. Jankowski L. J¨ antschi S.J.K. Jensen N.J. Jeon T.H. Jeon T. Jeong H. Ji X. Ji D.Y. Jia C. Jiang H. Jiang


M.J. Jiang P. Jiang W. Jiang Y. Jiang H. Jin J. Jin L. Jingling G.-S. Jo D. Johnson J. Johnstone J.J. Jung K. Juszczyszyn J.A. Kaandorp M. Kabelac B. Kadlec R. Kakkar C. Kameyama B.D. Kandhai S. Kandl K. Kang S. Kato S. Kawata T. Kegl W.A. Kelly J. Kennedy G. Khan J.B. Kido C.H. Kim D.S. Kim D.W. Kim H. Kim J.G. Kim J.H. Kim M. Kim T.H. Kim T.W. Kim P. Kiprof R. Kirner M. Kisiel-Dorohinicki J. Kitowski C.R. Kleijn M. Kluge upfer A. Kn¨ I.S. Ko Y. Ko


R. Kobler B. Koblitz G.A. Kochenberger M. Koda T. Koeckerbauer M. Koehler I. Kolingerova V. Korkhov T. Korkmaz L. Kotulski G. Kou J. Kozlak M. Krafczyk D. Kranzlm¨ uller B. Kryza V.V. Krzhizhanovskaya M. Kunze D. Kurzyniec E. Kusmierek S. Kwang Y. Kwok F. Kyriakopoulos H. Labiod A. Lagana H. Lai S. Lai Z. Lan G. Le Mahec B.G. Lee C. Lee H.K. Lee J. Lee J. Lee J.H. Lee S. Lee S.Y. Lee V. Lee Y.H. Lee L. Lefevre L. Lei F. Lelj A. Lesar D. Lesthaeghe Z. Levnajic A. Lewis

A. Li D. Li D. Li E. Li J. Li J. Li J.P. Li M. Li P. Li X. Li X.M. Li X.S. Li Y. Li Y. Li J. Liang L. Liang W.K. Liao X.F. Liao G.G. Lim H.W. Lim S. Lim A. Lin I.C. Lin I-C. Lin Y. Lin Z. Lin P. Lingras C.Y. Liu D. Liu D.S. Liu E.L. Liu F. Liu G. Liu H.L. Liu J. Liu J.C. Liu R. Liu S.Y. Liu W.B. Liu X. Liu Y. Liu Y. Liu Y. Liu Y. Liu Y.J. Liu

Y.Z. Liu Z.J. Liu S.-C. Lo R. Loogen B. L´opez A. L´ opez Garc´ıa de Lomana F. Loulergue G. Lu J. Lu J.H. Lu M. Lu P. Lu S. Lu X. Lu Y.C. Lu C. Lursinsap L. Ma M. Ma T. Ma A. Macedo N. Maillard M. Malawski S. Maniccam S.S. Manna Z.M. Mao M. Mascagni E. Mathias R.C. Maurya V. Maxville A.S. McGough R. Mckay T.-G. MCKenzie K. Meenal R. Mehrotra M. Meneghin F. Meng M.F.J. Meng E. Merkevicius M. Metzger Z. Michalewicz J. Michopoulos J.-C. Mignot R. mikusauskas H.Y. Ming


G. Miranda Valladares M. Mirua G.P. Miscione C. Miyaji A. Miyoshi J. Monterde E.D. Moreno G. Morra J.T. Moscicki H. Moshkovich V.M. Moskaliova G. Mounie C. Mu A. Muraru H. Na K. Nakajima Y. Nakamori S. Naqvi S. Naqvi R. Narayanan A. Narjess A. Nasri P. Navaux P.O.A. Navaux M. Negoita Z. Nemeth L. Neumann N.T. Nguyen J. Ni Q. Ni K. Nie G. Nikishkov V. Nitica W. Nocon A. Noel G. Norman ´ Nuall´ B. O ain N. O’Boyle J.T. Oden Y. Ohsawa H. Okuda D.L. Olson C.W. Oosterlee V. Oravec S. Orlando

F.R. Ornellas A. Ortiz S. Ouyang T. Owens S. Oyama B. Ozisikyilmaz A. Padmanabhan Z. Pan Y. Papegay M. Paprzycki M. Parashar K. Park M. Park S. Park S.K. Pati M. Pauley C.P. Pautasso B. Payne T.C. Peachey S. Pelagatti F.L. Peng Q. Peng Y. Peng N. Petford A.D. Pimentel W.A.P. Pinheiro J. Pisharath G. Pitel D. Plemenos S. Pllana S. Ploux A. Podoleanu M. Polak D. Prabu B.B. Prahalada Rao V. Prasanna P. Praxmarer V.B. Priezzhev T. Priol T. Prokosch G. Pucciani D. Puja P. Puschner L. Qi D. Qin

H. Qin K. Qin R.X. Qin X. Qin G. Qiu X. Qiu J.Q. Quinqueton M.R. Radecki S. Radhakrishnan S. Radharkrishnan M. Ram S. Ramakrishnan P.R. Ramasami P. Ramsamy K.R. Rao N. Ratnakar T. Recio K. Regenauer-Lieb R. Rejas F.Y. Ren A. Rendell P. Rhodes J. Ribelles M. Riedel R. Rioboo Y. Robert G.J. Rodgers A.S. Rodionov D. Rodr´ıguez Garc´ıa C. Rodriguez Leon F. Rogier G. Rojek L.L. Rong H. Ronghuai H. Rosmanith F.-X. Roux R.K. Roy U. R¨ ude M. Ruiz T. Ruofeng K. Rycerz M. Ryoke F. Safaei T. Saito V. Sakalauskas


L. Santillo R. Santinelli K. Sarac H. Sarafian M. Sarfraz V.S. Savchenko M. Sbert R. Schaefer D. Schmid J. Schneider M. Schoeberl S.-B. Scholz B. Schulze S.R. Seelam B. Seetharamanjaneyalu J. Seo K.D. Seo Y. Seo O.A. Serra A. Sfarti H. Shao X.J. Shao F.T. Sheldon H.Z. Shen S.L. Shen Z.H. Sheng H. Shi Y. Shi S. Shin S.Y. Shin B. Shirazi D. Shires E. Shook Z.S. Shuai M.A. Sicilia M. Simeonidis K. Singh M. Siqueira W. Sit M. Skomorowski A. Skowron P.M.A. Sloot M. Smolka B.S. Sniezynski H.Z. Sojka

A.E. Solomonides C. Song L.J. Song S. Song W. Song J. Soto A. Sourin R. Srinivasan V. Srovnal V. Stankovski P. Sterian H. Stockinger D. Stokic A. Streit B. Strug P. Stuedi A. St¨ umpel S. Su V. Subramanian P. Suganthan D.A. Sun H. Sun S. Sun Y.H. Sun Z.G. Sun M. Suvakov H. Suzuki D. Szczerba L. Szecsi L. Szirmay-Kalos R. Tadeusiewicz B. Tadic T. Takahashi S. Takeda J. Tan H.J. Tang J. Tang S. Tang T. Tang X.J. Tang J. Tao M. Taufer S.F. Tayyari C. Tedeschi J.C. Teixeira

F. Terpstra C. Te-Yi A.Y. Teymorian D. Thalmann A. Thandavan L. Thompson S. Thurner F.Z. Tian Y. Tian Z. Tianshu A. Tirado-Ramos A. Tirumala P. Tjeerd W. Tong A.S. Tosun A. Tropsha C. Troyer K.C.K. Tsang A.C. Tsipis I. Tsutomu A. Turan P. Tvrdik U. Ufuktepe V. Uskov B. Vaidya E. Valakevicius I.A. Valuev S. Valverde G.D. van Albada R. van der Sman F. van Lingen A.J.C. Varandas C. Varotsos D. Vasyunin R. Veloso J. Vigo-Aguiar J. Vill` a i Freixa V. Vivacqua E. Vumar R. Walentkynski D.W. Walker B. Wang C.L. Wang D.F. Wang D.H. Wang


F. Wang F.L. Wang H. Wang H.G. Wang H.W. Wang J. Wang J. Wang J. Wang J. Wang J.H. Wang K. Wang L. Wang M. Wang M.Z. Wang Q. Wang Q.Q. Wang S.P. Wang T.K. Wang W. Wang W.D. Wang X. Wang X.J. Wang Y. Wang Y.Q. Wang Z. Wang Z.T. Wang A. Wei G.X. Wei Y.-M. Wei X. Weimin D. Weiskopf B. Wen A.L. Wendelborn I. Wenzel A. Wibisono A.P. Wierzbicki R. Wism¨ uller F. Wolf C. Wu C. Wu F. Wu G. Wu J.N. Wu X. Wu X.D. Wu

Y. Wu Z. Wu B. Wylie M. Xavier Py Y.M. Xi H. Xia H.X. Xia Z.R. Xiao C.F. Xie J. Xie Q.W. Xie H. Xing H.L. Xing J. Xing K. Xing L. Xiong M. Xiong S. Xiong Y.Q. Xiong C. Xu C.-H. Xu J. Xu M.W. Xu Y. Xu G. Xue Y. Xue Z. Xue A. Yacizi B. Yan N. Yan N. Yan W. Yan H. Yanami C.T. Yang F.P. Yang J.M. Yang K. Yang L.T. Yang L.T. Yang P. Yang X. Yang Z. Yang W. Yanwen S. Yarasi D.K.Y. Yau


P.-W. Yau M.J. Ye G. Yen R. Yi Z. Yi J.G. Yim L. Yin W. Yin Y. Ying S. Yoo T. Yoshino W. Youmei Y.K. Young-Kyu Han J. Yu J. Yu L. Yu Z. Yu Z. Yu W. Yu Lung X.Y. Yuan W. Yue Z.Q. Yue D. Yuen T. Yuizono J. Zambreno P. Zarzycki M.A. Zatevakhin S. Zeng A. Zhang C. Zhang D. Zhang D.L. Zhang D.Z. Zhang G. Zhang H. Zhang H.R. Zhang H.W. Zhang J. Zhang J.J. Zhang L.L. Zhang M. Zhang N. Zhang P. Zhang P.Z. Zhang Q. Zhang


S. Zhang W. Zhang W. Zhang Y.G. Zhang Y.X. Zhang Z. Zhang Z.W. Zhang C. Zhao H. Zhao H.K. Zhao H.P. Zhao J. Zhao M.H. Zhao W. Zhao

Z. Zhao L. Zhen B. Zheng G. Zheng W. Zheng Y. Zheng W. Zhenghong P. Zhigeng W. Zhihai Y. Zhixia A. Zhmakin C. Zhong X. Zhong K.J. Zhou

L.G. Zhou X.J. Zhou X.L. Zhou Y.T. Zhou H.H. Zhu H.L. Zhu L. Zhu X.Z. Zhu Z. Zhu M. Zhu. J. Zivkovic A. Zomaya E.V. Zudilova-Seinstra

Workshop Organizers Sixth International Workshop on Computer Graphics and Geometric Modelling A. Iglesias, University of Cantabria, Spain Fifth International Workshop on Computer Algebra Systems and Applications A. Iglesias, University of Cantabria, Spain, A. Galvez, University of Cantabria, Spain PAPP 2007 - Practical Aspects of High-Level Parallel Programming (4th International Workshop) A. Benoit, ENS Lyon, France F. Loulergue, LIFO, Orléans, France International Workshop on Collective Intelligence for Semantic and Knowledge Grid (CISKGrid 2007) N.T. Nguyen, Wroclaw University of Technology, Poland J.J. Jung, INRIA Rhône-Alpes, France K. Juszczyszyn, Wroclaw University of Technology, Poland Simulation of Multiphysics Multiscale Systems, 4th International Workshop V.V. Krzhizhanovskaya, Section Computational Science, University of Amsterdam, The Netherlands A.G. Hoekstra, Section Computational Science, University of Amsterdam, The Netherlands


S. Sun, Clemson University, USA J. Geiser, Humboldt University of Berlin, Germany 2nd Workshop on Computational Chemistry and Its Applications (2nd CCA) P.R. Ramasami, University of Mauritius Efficient Data Management for HPC Simulation Applications R.-P. Mundani, Technische Universität München, Germany J. Abawajy, Deakin University, Australia M. Mat Deris, Tun Hussein Onn College University of Technology, Malaysia Real Time Systems and Adaptive Applications (RTSAA-2007) J. Hong, Soongsil University, South Korea T. Kuo, National Taiwan University, Taiwan The International Workshop on Teaching Computational Science (WTCS 2007) L. Qi, Department of Information and Technology, Central China Normal University, China W. Yanwen, Department of Information and Technology, Central China Normal University, China W. Zhenghong, East China Normal University, School of Information Science and Technology, China GeoComputation Y. Xue, IRSA, China Risk Analysis C.F. Huang, Beijing Normal University, China Advanced Computational Approaches and IT Techniques in Bioinformatics M.A. Pauley, University of Nebraska at Omaha, USA H.A. Ali, University of Nebraska at Omaha, USA Workshop on Computational Finance and Business Intelligence Y. Shi, Chinese Academy of Sciences, China S.Y. Wang, Academy of Mathematical and System Sciences, Chinese Academy of Sciences, China X.T. Deng, Department of Computer Science, City University of Hong Kong, China


Collaborative and Cooperative Environments C. Anthes, Institute of Graphics and Parallel Processing, JKU, Austria V.N. Alexandrov, ACET Centre, The University of Reading, UK D. Kranzlmüller, Institute of Graphics and Parallel Processing, JKU, Austria J. Volkert, Institute of Graphics and Parallel Processing, JKU, Austria Tools for Program Development and Analysis in Computational Science A. Knüpfer, ZIH, TU Dresden, Germany A. Bode, TU Munich, Germany D. Kranzlmüller, Institute of Graphics and Parallel Processing, JKU, Austria J. Tao, CAPP, University of Karlsruhe, Germany R. Wismüller, FB12, BSVS, University of Siegen, Germany J. Volkert, Institute of Graphics and Parallel Processing, JKU, Austria Workshop on Mining Text, Semi-structured, Web or Multimedia Data (WMTSWMD 2007) G. Kou, Thomson Corporation, R&D, USA Y. Peng, Omnium Worldwide, Inc., USA J.P. Li, Institute of Policy and Management, Chinese Academy of Sciences, China 2007 International Workshop on Graph Theory, Algorithms and Its Applications in Computer Science (IWGA 2007) M. Li, Dalian University of Technology, China 2nd International Workshop on Workflow Systems in e-Science (WSES 2007) Z. Zhao, University of Amsterdam, The Netherlands A. Belloum, University of Amsterdam, The Netherlands 2nd International Workshop on Internet Computing in Science and Engineering (ICSE 2007) J. Ni, The University of Iowa, USA Workshop on Evolutionary Algorithms and Evolvable Systems (EAES 2007) B. Zheng, College of Computer Science, South-Central University for Nationalities, Wuhan, China Y. Li, State Key Lab. of Software Engineering, Wuhan University, Wuhan, China J. Wang, College of Computer Science, South-Central University for Nationalities, Wuhan, China L. Ding, State Key Lab. of Software Engineering, Wuhan University, Wuhan, China


Wireless and Mobile Systems 2007 (WMS 2007) H. Choo, Sungkyunkwan University, South Korea WAFTS: WAvelets, FracTals, Short-Range Phenomena — Computational Aspects and Applications C. Cattani, University of Salerno, Italy C. Toma, Polythecnica, Bucharest, Romania Dynamic Data-Driven Application Systems - DDDAS 2007 F. Darema, National Science Foundation, USA The Seventh International Workshop on Meta-synthesis and Complex Systems (MCS 2007) X.J. Tang, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China J.F. Gu, Institute of Systems Science, Chinese Academy of Sciences, China Y. Nakamori, Japan Advanced Institute of Science and Technology, Japan H.C. Wang, Shanghai Jiaotong University, China The 1st International Workshop on Computational Methods in Energy Economics L. Yu, City University of Hong Kong, China J. Li, Chinese Academy of Sciences, China D. Qin, Guangdong Provincial Development and Reform Commission, China High-Performance Data Mining Y. Liu, Data Technology and Knowledge Economy Research Center, Chinese Academy of Sciences, China A. Choudhary, Electrical and Computer Engineering Department, Northwestern University, USA S. Chiu, Department of Computer Science, College of Engineering, Idaho State University, USA Computational Linguistics in Human–Computer Interaction H. Ji, Sungkyunkwan University, South Korea Y. Seo, Chungbuk National University, South Korea H. Choo, Sungkyunkwan University, South Korea Intelligent Agents in Computing Systems K. Cetnarowicz, Department of Computer Science, AGH University of Science and Technology, Poland R. Schaefer, Department of Computer Science, AGH University of Science and Technology, Poland


Networks: Theory and Applications B. Tadic, Jozef Stefan Institute, Ljubljana, Slovenia S. Thurner, COSY, Medical University Vienna, Austria Workshop on Computational Science in Software Engineering D. Rodrguez, University of Alcala, Spain J.J. Cuadrado-Gallego, University of Alcala, Spain International Workshop on Advances in Computational Geomechanics and Geophysics (IACGG 2007) H.L. Xing, The University of Queensland and ACcESS Major National Research Facility, Australia J.H. Wang, Shanghai Jiao Tong University, China 2nd International Workshop on Evolution Toward Next-Generation Internet (ENGI) Y. Cui, Tsinghua University, China Parallel Monte Carlo Algorithms for Diverse Applications in a Distributed Setting V.N. Alexandrov, ACET Centre, The University of Reading, UK The 2007 Workshop on Scientific Computing in Electronics Engineering (WSCEE 2007) Y. Li, National Chiao Tung University, Taiwan High-Performance Networked Media and Services 2007 (HiNMS 2007) I.S. Ko, Dongguk University, South Korea Y.J. Na, Honam University, South Korea

Table of Contents – Part I

A Composite Finite Element-Finite Difference Model Applied to Turbulence Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lale Balas and Asu İnan

1

Vortex Identification in the Wall Region of Turbulent Channel Flow . . . . Giancarlo Alfonsi and Leonardo Primavera

9

Numerical Solution of a Two-Class LWR Traffic Flow Model by High-Resolution Central-Upwind Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jianzhong Chen, Zhongke Shi, and Yanmei Hu

17

User-Controllable GPGPU-Based Target-Driven Smoke Simulation . . . . . Jihyun Ryu and Sanghun Park

25

Variable Relaxation Solve for Nonlinear Thermal Conduction . . . . . . . . . Jin Chen

30

A Moving Boundary Wave Run-Up Model . . . . . . . . . . . . . . . . . . . . . . . . . . Asu İnan and Lale Balas

38

Enabling Very-Large Scale Earthquake Simulations on Parallel Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yifeng Cui, Reagan Moore, Kim Olsen, Amit Chourasia, Philip Maechling, Bernard Minster, Steven Day, Yuanfang Hu, Jing Zhu, Amitava Majumdar, and Thomas Jordan

46

Fast Insolation Computation in Large Territories . . . . . . . . . . . . . . . . . . . . . Siham Tabik, Jesús M. Vías, Emilio L. Zapata, and Luis F. Romero

54

Non-equilibrium Thermodynamics, Thermomechanics, Geodynamics . . . . Klaus Regenauer-Lieb, Bruce Hobbs, Alison Ord, and Dave A. Yuen

62

A Finite Element Model for Epidermal Wound Healing . . . . . . . . . . . . . . . F.J. Vermolen and J.A. Adam

70

Predicting Binding Sites of Hepatitis C Virus Complexes Using Residue Binding Propensity and Sequence Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . Guang-Zheng Zhang, Chirag Nepal, and Kyungsook Han

78

Use of Parallel Simulated Annealing for Computational Modeling of Human Head Conductivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Adnan Salman, Allen Malony, Sergei Turovets, and Don Tucker

86


Mining Molecular Structure Data for the Patterns of Interactions Between Protein and RNA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kyungsook Han and Chirag Nepal

94

Detecting Periodically Expression in Unevenly Spaced Microarray Time Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jun Xian, Jinping Wang, and Dao-Qing Dai

102

Creating Individual Based Models of the Plankton Ecosystem . . . . . . . . . Wes Hinsley, Tony Field, and John Woods

111

A Hybrid Agent-Based Model of Chemotaxis . . . . . . . . . . . . . . . . . . . . . . . . Zaiyi Guo and Joc Cing Tay

119

Block-Based Approach to Solving Linear Systems . . . . . . . . . . . . . . . . . . . . Sunil R. Tiyyagura and Uwe Küster

128

Numerical Tests with Gauss-Type Nested Implicit Runge-Kutta Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gennady Yu. Kulikov and Sergey K. Shindin

136

An Efficient Implementation of the Thomas-Algorithm for Block Penta-diagonal Systems on Vector Computers . . . . . . . . . . . . . . . . . . . . . . . Katharina Benkert and Rudolf Fischer

144

Compatibility of Scalapack with the Discrete Wavelet Transform . . . . . . . Liesner Acevedo, Victor M. Garcia, and Antonio M. Vidal

152

A Model for Representing Topological Relations Between Simple Concave Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jihong Ou Yang, Qian Fu, and Dayou Liu

160

Speech Emotion Recognition Based on a Fusion of All-Class and Pairwise-Class Feature Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jia Liu, Chun Chen, Jiajun Bu, Mingyu You, and Jianhua Tao

168

Regularized Knowledge-Based Kernel Machine . . . . . . . . . . . . . . . . . . . . . . . Olutayo O. Oladunni and Theodore B. Trafalis

176

Three-Phase Inverse Design Stefan Problem . . . . . . . . . . . . . . . . . . . . . . . . . Damian Slota

184

Semi-supervised Clustering Using Incomplete Prior Knowledge . . . . . . . . . Chao Wang, Weijun Chen, Peipei Yin, and Jianmin Wang

192

Distributed Reasoning with Fuzzy Description Logics . . . . . . . . . . . . . . . . . Jianjiang Lu, Yanhui Li, Bo Zhou, Dazhou Kang, and Yafei Zhang

196


Effective Pattern Similarity Match for Multidimensional Sequence Data Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seok-Lyong Lee and Deok-Hwan Kim

204

GPU-Accelerated Montgomery Exponentiation . . . . . . . . . . . . . . . . . . . . . . Sebastian Fleissner

213

Hierarchical-Matrix Preconditioners for Parabolic Optimal Control Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Suely Oliveira and Fang Yang

221

Searching and Updating Metric Space Databases Using the Parallel EGNAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mauricio Marin, Roberto Uribe, and Ricardo Barrientos

229

An Efficient Algorithm and Its Parallelization for Computing PageRank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jonathan Qiao, Brittany Jones, and Stacy Thrall

237

A Query Index for Stream Data Using Interval Skip Lists Exploiting Locality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jun-Ki Min

245

Accelerating XML Structural Matching Using Suffix Bitmaps . . . . . . . . . . Feng Shao, Gang Chen, and Jinxiang Dong

253

Improving XML Querying with Maximal Frequent Query Patterns . . . . . Yijun Bei, Gang Chen, and Jinxiang Dong

261

A Logic-Based Approach to Mining Inductive Databases . . . . . . . . . . . . . . Hong-Cheu Liu, Jeffrey Xu Yu, John Zeleznikow, and Ying Guan

270

An Efficient Quantum-Behaved Particle Swarm Optimization for Multiprocessor Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiaohong Kong, Jun Sun, Bin Ye, and Wenbo Xu

278

Toward Optimizing Particle-Simulation Systems . . . . . . . . . . . . . . . . . . . . . Hai Jiang, Hung-Chi Su, and Bin Zhang

286

A Modified Quantum-Behaved Particle Swarm Optimization . . . . . . . . . . Jun Sun, C.-H. Lai, Wenbo Xu, Yanrui Ding, and Zhilei Chai

294

Neural Networks for Predicting the Behavior of Preconditioned Iterative Solvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . America Holloway and Tzu-Yi Chen

302

On the Normal Boundary Intersection Method for Generation of Efficient Front . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pradyumn Kumar Shukla

310


An Improved Laplacian Smoothing Approach for Surface Meshes . . . . . . . Ligang Chen, Yao Zheng, Jianjun Chen, and Yi Liang

318

Red-Black Half-Sweep Iterative Method Using Triangle Finite Element Approximation for 2D Poisson Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Sulaiman, M. Othman, and M.K. Hasan

326

Optimizing Surface Triangulation Via Near Isometry with Reference Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiangmin Jiao, Narasimha R. Bayyana, and Hongyuan Zha

334

Efficient Adaptive Strategy for Solving Inverse Problems . . . . . . . . . . . . . . M. Paszyński, B. Barabasz, and R. Schaefer

342

Topology Preserving Tetrahedral Decomposition of Trilinear Cell . . . . . . Bong-Soo Sohn

350

FITTING: A Portal to Fit Potential Energy Functionals to ab initio Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Leonardo Pacifici, Leonardo Arteconi, and Antonio Laganà

358

Impact of QoS on Replica Placement in Tree Networks . . . . . . . . . . . . . . . Anne Benoit, Veronika Rehn, and Yves Robert

366

Generating Traffic Time Series Based on Generalized Cauchy Process . . . Ming Li, S.C. Lim, and Huamin Feng

374

Reliable and Scalable State Management Using Migration of State Information in Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jongbae Moon, Hyungil Park, and Myungho Kim

382

Efficient and Reliable Execution of Legacy Codes Exposed as Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bartosz Baliś, Marian Bubak, Kamil Sterna, and Adam Bemben

390

Provenance Provisioning in Mobile Agent-Based Distributed Job Workflow Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yuhong Feng and Wentong Cai

398

EPLAS: An Epistemic Programming Language for All Scientists . . . . . . . Isao Takahashi, Shinsuke Nara, Yuichi Goto, and Jingde Cheng

406

Translation of Common Information Model to Web Ontology Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marta Majewska, Bartosz Kryza, and Jacek Kitowski

414

XML Based Semantic Data Grid Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hui Tan and Xinmeng Chen

418


Communication-Aware Scheduling Algorithm Based on Heterogeneous Computing Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Youlin Ruan, Gan Liu, Jianjun Han, and Qinghua Li

426

Macro Adjustment Based Task Scheduling in Hierarchical Grid Market . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Peijie Huang, Hong Peng, and Xuezhen Li

430

DGSS: A Dependability Guided Job Scheduling System for Grid Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yongcai Tao, Hai Jin, and Xuanhua Shi

434

An Exact Algorithm for the Servers Allocation, Capacity and Flow Assignment Problem with Cost Criterion and Delay Constraint in Wide Area Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marcin Markowski and Andrzej Kasprzak Adaptive Divisible Load Model for Scheduling Data-Intensive Grid Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Othman, M. Abdullah, H. Ibrahim, and S. Subramaniam Providing Fault-Tolerance in Unreliable Grid Systems Through Adaptive Checkpointing and Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . Maria Chtepen, Filip H.A. Claeys, Bart Dhoedt, Filip De Turck, Peter A. Vanrolleghem, and Piet Demeester

442

446

454

A Machine-Learning Based Load Prediction Approach for Distributed Service-Oriented Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jun Wang, Yi Ren, Di Zheng, and Quan-Yuan Wu

462

A Balanced Resource Allocation and Overload Control Infrastructure for the Service Grid Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jun Wang, Yi Ren, Di Zheng, and Quan-Yuan Wu

466

Recognition and Optimization of Loop-Carried Stream Reusing of Scientific Computing Applications on the Stream Processor . . . . . . . . . . . Ying Zhang, Gen Li, and Xuejun Yang

474

A Scalable Parallel Software Volume Rendering Algorithm for Large-Scale Unstructured Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kangjian Wang and Yao Zheng

482

Geometry-Driven Nonlinear Equation with an Accelerating Coupled Scheme for Image Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shujun Fu, Qiuqi Ruan, Chengpo Mu, and Wenqia Wang

490

A Graph Clustering Algorithm Based on Minimum and Normalized Cut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jiabing Wang, Hong Peng, Jingsong Hu, and Chuangxin Yang

497


A-PARM: Adaptive Division of Sub-cells in the PARM for Efficient Volume Ray Casting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sukhyun Lim and Byeong-Seok Shin

505

Inaccuracies of Shape Averaging Method Using Dynamic Time Warping for Time Series Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vit Niennattrakul and Chotirat Ann Ratanamahatana

513

An Algebraic Substructuring Method for High-Frequency Response Analysis of Micro-systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jin Hwan Ko and Zhaojun Bai

521

Multilevel Task Partition Algorithm for Parallel Simulation of Power System Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Xue and Shanxiang Qi

529

An Extended Implementation of the Great Deluge Algorithm for Course Timetabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Paul McMullan

538

Cubage-Weight Balance Algorithm for the Scattered Goods Loading with Two Aims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liu Xiao-qun, Ma Shi-hua, and Li Qi

546

Modeling VaR in Crude Oil Market: A Multi Scale Nonlinear Ensemble Approach Incorporating Wavelet Analysis and ANN . . . . . . . . . . . . . . . . . . Kin Keung Lai, Kaijian He, and Jerome Yen

554

On the Assessment of Petroleum Corporation’s Sustainability Based on Linguistic Fuzzy Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Li-fan Zhang

562

A Multiagent Model for Supporting Tourism Policy-Making by Market Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Arnaldo Cecchini and Giuseppe A. Trunfio

567

An Improved Chaos-Based Image Encryption Scheme . . . . . . . . . . . . . . . . . Chong Fu, Zhen-chuan Zhang, Ying Chen, and Xing-wei Wang

575

A Factory Pattern in Fortran 95 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viktor K. Decyk and Henry J. Gardner

583

Mapping Pipeline Skeletons onto Heterogeneous Platforms . . . . . . . . . . . . Anne Benoit and Yves Robert

591

On the Optimal Object-Oriented Program Re-modularization . . . . . . . . . Saeed Parsa and Omid Bushehrian

599


A Buffered-Mode MPI Implementation for the Cell BETM Processor . . . . Arun Kumar, Ganapathy Senthilkumar, Murali Krishna, Naresh Jayam, Pallav K. Baruah, Raghunath Sharma, Ashok Srinivasan, and Shakti Kapoor


603

Implementation of the Parallel Superposition in Bulk-Synchronous Parallel ML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Frédéric Gava

611

Parallelization of Generic Libraries Based on Type Properties . . . . . . . . . Prabhanjan Kambadur, Douglas Gregor, and Andrew Lumsdaine

620

Traffic Routing Through Off-Line LSP Creation . . . . . . . . . . . . . . . . . . . . . Srecko Krile and Djuro Kuzumilovic

628

Simulating Trust Overlay in P2P Networks . . . . . . . . . . . . . . . . . . . . . . . . . . Yan Zhang, Wei Wang, and Shunying Lü

632

Detecting Shrew HTTP Flood Attacks for Flash Crowds . . . . . . . . . . . . . . Yi Xie and Shun-Zheng Yu

640

A New Fault-Tolerant Routing Algorithm for m-ary n-cube Multi-computers and Its Performance Analysis . . . . . . . . . . . . . . . . . . . . . . . Liu Hongmei

648

CARP : Context-Aware Resource Provisioning for Multimedia over 4G Wireless Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Navrati Saxena, Abhishek Roy, and Jitae Shin

652

Improved Fast Handovers for Mobile IPv6 over IEEE 802.16e Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sukyoung Ahn and Youngsong Mun

660

Advanced Bounded Shortest Multicast Algorithm for Delay Constrained Minimum Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Moonseong Kim, Gunu Jho, and Hyunseung Choo

668

Efficient Deadlock Detection in Parallel Computer Systems with Wormhole Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Soojung Lee

676

Type-Based Query Expansion for Sentence Retrieval . . . . . . . . . . . . . . . . . Keke Cai, Chun Chen, Jiajun Bu, and Guang Qiu

684

An Extended R-Tree Indexing Method Using Selective Prefetching in Main Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hong-Koo Kang, Joung-Joon Kim, Dong-Oh Kim, and Ki-Joon Han

692


Single Data Copying for MPI Communication Optimization on Shared Memory System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Qiankun Miao, Guangzhong Sun, Jiulong Shan, and Guoliang Chen

700

Adaptive Sparse Grid Classification Using Grid Environments . . . . . . . . . Dirk Pflüger, Ioan Lucian Muntean, and Hans-Joachim Bungartz

708

Latency-Optimized Parallelization of the FMM Near-Field Computations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ivo Kabadshow and Bruno Lang

716

Efficient Generation of Parallel Quasirandom Faure Sequences Via Scrambling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongmei Chi and Michael Mascagni

723

Complexity of Monte Carlo Algorithms for a Class of Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ivan Dimov and Rayna Georgieva

731

Modeling of Carrier Transport in Nanowires . . . . . . . . . . . . . . . . . . . . . . . . . T. Gurov, E. Atanassov, M. Nedjalkov, and I. Dimov

739

Monte Carlo Numerical Treatment of Large Linear Algebra Problems . . . Ivan Dimov, Vassil Alexandrov, Rumyana Papancheva, and Christian Weihrauch

747

Simulation of Multiphysics Multiscale Systems: Introduction to the ICCS’2007 Workshop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Valeria V. Krzhizhanovskaya and Shuyu Sun

755

Simulating Weed Propagation Via Hierarchical, Patch-Based Cellular Automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Adam G. Dunn and Jonathan D. Majer

762

A Multiscale, Cell-Based Framework for Modeling Cancer Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yi Jiang

770

Stochastic Modelling and Simulation of Coupled Autoregulated Oscillators in a Multicellular Environment: The her1/her7 Genes . . . . . . . André Leier, Kevin Burrage, and Pamela Burrage

778

Multiscale Modeling of Biopolymer Translocation Through a Nanopore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Maria Fyta, Simone Melchionna, Efthimios Kaxiras, and Sauro Succi

786


Multi-physics and Multi-scale Modelling in Cardiovascular Physiology: Advanced User Methods for Simulation of Biological Systems with ANSYS/CFX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Díaz-Zuccarini, D. Rafirou, D.R. Hose, P.V. Lawford, and A.J. Narracott

794

Lattice Boltzmann Simulation of Mixed Convection in a Driven Cavity Packed with Porous Medium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhenhua Chai, Zhaoli Guo, and Baochang Shi

802

Numerical Study of Cross Diffusion Effects on Double Diffusive Convection with Lattice Boltzmann Method . . . . . . . . . . . . . . . . . . . . . . . . . Xiaomei Yu, Zhaoli Guo, and Baochang Shi

810

Lattice Boltzmann Simulation of Some Nonlinear Complex Equations . . . Baochang Shi

818

A General Long-Time Molecular Dynamics Scheme in Atomistic Systems: Hyperdynamics in Entropy Dominated Systems . . . . . . . . . . . . . Xin Zhou and Yi Jiang

826

A New Constitutive Model for the Analysis of Semi-flexible Polymers with Internal Viscosity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jack Xiao-Dong Yang and Roderick V.N. Melnik

834

Coupled Navier-Stokes/DSMC Method for Transient and Steady-State Gas Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Giannandrea Abbate, Barend J. Thijsse, and Chris R. Kleijn

842

Multi-scale Simulations of Gas Flows with Unified Flow Solver . . . . . . . . . V.V. Aristov, A.A. Frolova, S.A. Zabelok, V.I. Kolobov, and R.R. Arslanbekov Coupling Atomistic and Continuum Models for Multi-scale Simulations of Gas Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vladimir Kolobov, Robert Arslanbekov, and Alex Vasenkov Modelling Macroscopic Phenomena with Cellular Automata and Parallel Genetic Algorithms: An Application to Lava Flows . . . . . . . . . . . Maria Vittoria Avolio, Donato D’Ambrosio, Salvatore Di Gregorio, Rocco Rongo, William Spataro, and Giuseppe A. Trunfio Acceleration of Preconditioned Krylov Solvers for Bubbly Flow Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.M. Tang and C. Vuik

850

858

866

874


An Efficient Characteristic Method for the Magnetic Induction Equation with Various Resistivity Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jiangguo (James) Liu

882

Multiscale Discontinuous Galerkin Methods for Modeling Flow and Transport in Porous Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shuyu Sun and Jürgen Geiser

890

Fourier Spectral Solver for the Incompressible Navier-Stokes Equations with Volume-Penalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G.H. Keetels, H.J.H. Clercx, and G.J.F. van Heijst

898

High Quality Surface Mesh Generation for Multi-physics Bio-medical Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dominik Szczerba, Robert McGregor, and Gábor Székely

906

Macro-micro Interlocked Simulation for Multiscale Phenomena . . . . . . . . . Kanya Kusano, Shigenobu Hirose, Toru Sugiyama, Shinichiro Shima, Akio Kawano, and Hiroki Hasegawa Towards a Complex Automata Framework for Multi-scale Modeling: Formalism and the Scale Separation Map . . . . . . . . . . . . . . . . . . . . . . . . . . . Alfons G. Hoekstra, Eric Lorenz, Jean-Luc Falcone, and Bastien Chopard Multilingual Interfaces for Parallel Coupling in Multiphysics and Multiscale Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Everest T. Ong, J. Walter Larson, Boyana Norris, Robert L. Jacob, Michael Tobis, and Michael Steder On a New Isothermal Quantum Euler Model: Derivation, Asymptotic Analysis and Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pierre Degond, Samy Gallego, and Florian M´ehats Grate Furnace Combustion: A Submodel for the Solid Fuel Layer . . . . . . H.A.J.A. van Kuijk, R.J.M. Bastiaans, J.A. van Oijen, and L.P.H. de Goey

914

922

931

939

947

Introduction to the ICCS 2007 Workshop on Dynamic Data Driven Applications Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Frederica Darema

955

Pharmaceutical Informatics and the Pathway to Personalized Medicines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sangtae Kim and Venkat Venkatasubramanian

963


Towards Real-Time Distributed Signal Modeling for Brain-Machine Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jack DiGiovanna, Loris Marchal, Prapaporn Rattanatamrong, Ming Zhao, Shalom Darmanjian, Babak Mahmoudi, Justin C. Sanchez, Jos´e C. Pr´ıncipe, Linda Hermer-Vazquez, Renato Figueiredo, and Jos A.B. Fortes Using Cyber-Infrastructure for Dynamic Data Driven Laser Treatment of Cancer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C. Bajaj, J.T. Oden, K.R. Diller, J.C. Browne, J. Hazle, I. Babuˇska, J. Bass, L. Bidaut, L. Demkowicz, A. Elliott, Y. Feng, D. Fuentes, B. Kwon, S. Prudhomme, R.J. Stafford, and Y. Zhang Grid-Enabled Software Environment for Enhanced Dynamic Data-Driven Visualization and Navigation During Image-Guided Neurosurgery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nikos Chrisochoides, Andriy Fedorov, Andriy Kot, Neculai Archip, Daniel Goldberg-Zimring, Dan Kacher, Stephen Whalen, Ron Kikinis, Ferenc Jolesz, Olivier Clatz, Simon K. Warfield, Peter M. Black, and Alexandra Golby From Data Reverence to Data Relevance: Model-Mediated Wireless Sensing of the Physical Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Paul G. Flikkema, Pankaj K. Agarwal, James S. Clark, Carla Ellis, Alan Gelfand, Kamesh Munagala, and Jun Yang AMBROSia: An Autonomous Model-Based Reactive Observing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . David Caron, Abhimanyu Das, Amit Dhariwal, Leana Golubchik, Ramesh Govindan, David Kempe, Carl Oberg, Abhishek Sharma, Beth Stauffer, Gaurav Sukhatme, and Bin Zhang

964

972

980

988

995

Dynamically Identifying and Tracking Contaminants in Water Bodies . . . 1002 Craig C. Douglas, Martin J. Cole, Paul Dostert, Yalchin Efendiev, Richard E. Ewing, Gundolf Haase, Jay Hatcher, Mohamed Iskandarani, Chris R. Johnson, and Robert A. Lodder Hessian-Based Model Reduction for Large-Scale Data Assimilation Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1010 Omar Bashir, Omar Ghattas, Judith Hill, Bart van Bloemen Waanders, and Karen Willcox Localized Ensemble Kalman Dynamic Data Assimilation for Atmospheric Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018 Adrian Sandu, Emil M. Constantinescu, Gregory R. Carmichael, Tianfeng Chai, John H. Seinfeld, and Dacian D˘ aescu


Data Assimilation in Multiscale Chemical Transport Models . . . . . . . . . . . 1026 Lin Zhang and Adrian Sandu Building a Dynamic Data Driven Application System for Hurricane Forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1034 Gabrielle Allen A Dynamic Data Driven Wildland Fire Model . . . . . . . . . . . . . . . . . . . . . . . 1042 Jan Mandel, Jonathan D. Beezley, Lynn S. Bennethum, Soham Chakraborty, Janice L. Coen, Craig C. Douglas, Jay Hatcher, Minjeong Kim, and Anthony Vodacek Ad Hoc Distributed Simulation of Surface Transportation Systems . . . . . 1050 R.M. Fujimoto, R. Guensler, M. Hunter, K. Schwan, H.-K. Kim, B. Seshasayee, J. Sirichoke, and W. Suh Cyberinfrastructure for Contamination Source Characterization in Water Distribution Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058 Sarat Sreepathi, Kumar Mahinthakumar, Emily Zechman, Ranji Ranjithan, Downey Brill, Xiaosong Ma, and Gregor von Laszewski Integrated Decision Algorithms for Auto-steered Electric Transmission System Asset Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066 James McCalley, Vasant Honavar, Sarah Ryan, William Meeker, Daji Qiao, Ron Roberts, Yuan Li, Jyotishman Pathak, Mujing Ye, and Yili Hong DDDAS for Autonomic Interconnected Systems: The National Energy Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074 C. Hoffmann, E. Swain, Y. Xu, T. Downar, L. Tsoukalas, P. Top, M. Senel, M. Bell, E. Coyle, B. Loop, D. Aliprantis, O. Wasynczuk, and S. Meliopoulos Implementing Virtual Buffer for Electric Power Grids . . . . . . . . . . . . . . . . . 1083 Rong Gao and Lefteri H. Tsoukalas Enhanced Situational Awareness: Application of DDDAS Concepts to Emergency and Disaster Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1090 Gregory R. Madey, Albert-L´ aszl´ o Barab´ asi, Nitesh V. Chawla, Marta Gonzalez, David Hachen, Brett Lantz, Alec Pawling, Timothy Schoenharl, G´ abor Szab´ o, Pu Wang, and Ping Yan AIMSS: An Architecture for Data Driven Simulations in the Social Sciences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098 Catriona Kennedy, Georgios Theodoropoulos, Volker Sorge, Edward Ferrari, Peter Lee, and Chris Skelcher


Bio-terror Preparedness Exercise in a Mixed Reality Environment . . . . . . 1106 Alok Chaturvedi, Chih-Hui Hsieh, Tejas Bhatt, and Adam Santone Dynamic Tracking of Facial Expressions Using Adaptive, Overlapping Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114 Dimitris Metaxas, Atul Kanaujia, and Zhiguo Li Realization of Dynamically Adaptive Weather Analysis and Forecasting in LEAD: Four Years Down the Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1122 Lavanya Ramakrishnan, Yogesh Simmhan, and Beth Plale Active Learning with Support Vector Machines for Tornado Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1130 Theodore B. Trafalis, Indra Adrianto, and Michael B. Richman Adaptive Observation Strategies for Forecast Error Minimization . . . . . . 1138 Nicholas Roy, Han-Lim Choi, Daniel Gombos, James Hansen, Jonathan How, and Sooho Park Two Extensions of Data Assimilation by Field Alignment . . . . . . . . . . . . . 1147 Sai Ravela A Realtime Observatory for Laboratory Simulation of Planetary Circulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155 S. Ravela, J. Marshall, C. Hill, A. Wong, and S. Stransky Planet-in-a-Bottle: A Numerical Fluid-Laboratory System . . . . . . . . . . . . . 1163 Chris Hill, Bradley C. Kuszmaul, Charles E. Leiserson, and John Marshall Compressed Sensing and Time-Parallel Reduced-Order Modeling for Structural Health Monitoring Using a DDDAS . . . . . . . . . . . . . . . . . . . . . . . 1171 J. Cortial, C. Farhat, L.J. Guibas, and M. Rajashekhar Multi-level Coupling of Dynamic Data-Driven Experimentation with Material Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180 John G. Michopoulos and Tomonari Furukawa Evaluation of Fluid-Thermal Systems by Dynamic Data Driven Application Systems - Part II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189 D. Knight, Q. Ma, T. Rossman, and Y. Jaluria Dynamic Data-Driven Fault Diagnosis of Wind Turbine Systems . . . . . . . 1197 Yu Ding, Eunshin Byon, Chiwoo Park, Jiong Tang, Yi Lu, and Xin Wang Building Verifiable Sensing Applications Through Temporal Logic Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205 Asad Awan, Ahmed Sameh, Suresh Jagannathan, and Ananth Grama


Dynamic Data-Driven Systems Approach for Simulation Based Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213 Tahsin Kurc, Xi Zhang, Manish Parashar, Hector Klie, Mary F. Wheeler, Umit Catalyurek, and Joel Saltz DDDAS/ITR: A Data Mining and Exploration Middleware for Grid and Distributed Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222 Jon B. Weissman, Vipin Kumar, Varun Chandola, Eric Eilertson, Levent Ertoz, Gyorgy Simon, Seonho Kim, and Jinoh Kim A Combined Hardware/Software Optimization Framework for Signal Representation and Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230 Melina Demertzi, Pedro Diniz, Mary W. Hall, Anna C. Gilbert, and Yi Wang Validating Evolving Simulations in COERCE . . . . . . . . . . . . . . . . . . . . . . . . 1238 Paul F. Reynolds Jr., Michael Spiegel, Xinyu Liu, and Ross Gore Equivalent Semantic Translation from Parallel DEVS Models to Time Automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246 Shoupeng Han and Kedi Huang Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255

A Composite Finite Element-Finite Difference Model Applied to Turbulence Modelling Lale Balas and Asu İnan Department of Civil Engineering, Faculty of Engineering and Architecture Gazi University, 06570 Ankara, Turkey [email protected], [email protected]

Abstract. Turbulence has been modeled by a two-equation k-ω turbulence model to investigate the wind-induced circulation patterns in coastal waters. The predictions of the model have been compared with the predictions of a two-equation k-ε turbulence model. The kinetic energy of turbulence is k, the dissipation rate of turbulence is ε, and the frequency of turbulence is ω. In the three-dimensional modeling of turbulence by the k-ε model and by the k-ω model, a composite finite element-finite difference method has been used. The governing equations are solved by the Galerkin Weighted Residual Method in the vertical plane and by finite difference approximations in the horizontal plane. The water depths in coastal waters are divided into the same number of layers following the bottom topography; therefore, the vertical layer thickness is proportional to the local water depth. It has been seen that the two-equation k-ω turbulence model leads to better predictions than the k-ε model for wind-induced circulation in coastal waters. Keywords: Finite difference, finite element, modeling, turbulence, coastal.

1 Introduction There are different applications of turbulence models in the modeling studies of coastal transport processes. Some of the models use a constant eddy viscosity for the whole flow field, whose value is found from experiments or from trial-and-error calculations to match the observations for the problem considered. In some of the models, variations in the vertical eddy viscosity are described in algebraic forms. Algebraic or zero-equation turbulence models invariably utilize the Boussinesq assumption. In these models the mixing-length distribution is rather problem dependent and therefore the models lack universality. Further problems arise because the eddy viscosity and diffusivity vanish whenever the mean velocity gradient is zero. To overcome these limitations, turbulence models were developed which account for transport or history effects of turbulence quantities by solving differential transport equations for them. In one-equation turbulence models, the most meaningful velocity scale is k^{0.5}, where k is the kinetic energy of the turbulent motion per unit mass [1]. In one-equation models, it is difficult to determine the length scale distribution. Therefore the trend has been to move on to two-equation models


which determine the length scale from a transport equation. One of the two-equation models is the k-ε turbulence model, in which the length scale is obtained from the transport equation for the dissipation rate of the kinetic energy, ε [2],[3]. The other two-equation model is the k-ω turbulence model, which includes two equations, for the turbulent kinetic energy k and for the specific turbulent dissipation rate, or turbulent frequency, ω [4].

2 Theory The implicit baroclinic three-dimensional numerical model (HYDROTAM-3) has been improved with a two-equation k-ω turbulence model. The developed model is capable of computing water levels and water-particle velocity distributions in the three principal directions by solving the Navier-Stokes equations. The governing hydrodynamic equations in the three-dimensional Cartesian coordinate system, with the z-axis vertically upwards, are [5],[6],[7],[8]:

$$\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}=0 \qquad (1)$$

$$\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+w\frac{\partial u}{\partial z}=fv-\frac{1}{\rho_{0}}\frac{\partial p}{\partial x}+2\frac{\partial}{\partial x}\left(\nu_{h}\frac{\partial u}{\partial x}\right)+\frac{\partial}{\partial y}\left(\nu_{h}\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)\right)+\frac{\partial}{\partial z}\left(\nu_{z}\left(\frac{\partial u}{\partial z}+\frac{\partial w}{\partial x}\right)\right) \qquad (2)$$

$$\frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}+w\frac{\partial v}{\partial z}=-fu-\frac{1}{\rho_{0}}\frac{\partial p}{\partial y}+2\frac{\partial}{\partial y}\left(\nu_{h}\frac{\partial v}{\partial y}\right)+\frac{\partial}{\partial x}\left(\nu_{h}\left(\frac{\partial v}{\partial x}+\frac{\partial u}{\partial y}\right)\right)+\frac{\partial}{\partial z}\left(\nu_{z}\left(\frac{\partial v}{\partial z}+\frac{\partial w}{\partial y}\right)\right) \qquad (3)$$

$$\frac{\partial w}{\partial t}+u\frac{\partial w}{\partial x}+v\frac{\partial w}{\partial y}+w\frac{\partial w}{\partial z}=-\frac{1}{\rho_{0}}\frac{\partial p}{\partial z}-g+\frac{\partial}{\partial x}\left(\nu_{h}\left(\frac{\partial w}{\partial x}+\frac{\partial u}{\partial z}\right)\right)+\frac{\partial}{\partial y}\left(\nu_{h}\left(\frac{\partial w}{\partial y}+\frac{\partial v}{\partial z}\right)\right)+\frac{\partial}{\partial z}\left(\nu_{z}\frac{\partial w}{\partial z}\right) \qquad (4)$$

where x, y: horizontal coordinates; z: vertical coordinate; t: time; u, v, w: velocity components in the x, y, z directions at any grid location in space; $\nu_z$: eddy viscosity coefficient in the z direction; $\nu_h$: horizontal eddy viscosity coefficient; f: Coriolis coefficient; ρ(x,y,z,t): water density; g: gravitational acceleration; p: pressure. As the turbulence model, firstly the modified k-ω turbulence model is used. The model includes two equations, one for the turbulent kinetic energy k and one for the specific turbulent dissipation rate, or turbulent frequency, ω. The equations of the k-ω turbulence model are:

$$\frac{dk}{dt}=\frac{\partial}{\partial z}\left[\sigma^{*}\nu_{z}\frac{\partial k}{\partial z}\right]+P+\frac{\partial}{\partial x}\left[\sigma^{*}\nu_{h}\frac{\partial k}{\partial x}\right]+\frac{\partial}{\partial y}\left[\sigma^{*}\nu_{h}\frac{\partial k}{\partial y}\right]-\beta^{*}\omega k \qquad (5)$$

$$\frac{d\omega}{dt}=\frac{\partial}{\partial z}\left[\sigma\nu_{z}\frac{\partial\omega}{\partial z}\right]+\alpha\frac{\omega}{k}P+\frac{\partial}{\partial x}\left[\sigma\nu_{h}\frac{\partial\omega}{\partial x}\right]+\frac{\partial}{\partial y}\left[\sigma\nu_{h}\frac{\partial\omega}{\partial y}\right]-\beta\omega^{2} \qquad (6)$$


The stress production of the kinetic energy, P, and the vertical eddy viscosity, $\nu_z$, are defined by:

$$P=\nu_{h}\left[2\left(\frac{\partial u}{\partial x}\right)^{2}+2\left(\frac{\partial v}{\partial y}\right)^{2}+\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)^{2}\right]+\nu_{z}\left[\left(\frac{\partial u}{\partial z}\right)^{2}+\left(\frac{\partial v}{\partial z}\right)^{2}\right];\qquad \nu_{z}=\frac{k}{\omega} \qquad (7)$$

At high Reynolds numbers ($R_T$), the constants are taken as α = 5/9, β = 3/40, β* = 9/100, σ = 1/2 and σ* = 1/2, whereas at lower Reynolds numbers they are calculated as:

$$\alpha^{*}=\frac{1/40+R_{T}/6}{1+R_{T}/6};\qquad \alpha=\frac{5}{9}\,\frac{1/10+R_{T}/2.7}{1+R_{T}/2.7}\,(\alpha^{*})^{-1};\qquad \beta^{*}=\frac{9}{100}\,\frac{5/18+(R_{T}/8)^{4}}{1+(R_{T}/8)^{4}};\qquad R_{T}=\frac{k}{\omega\nu} \qquad (8)$$
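As an illustration only (not code from HYDROTAM-3), the short Python sketch below evaluates the low-Reynolds-number coefficients of Eq. (8) and the eddy viscosity ν_z = k/ω of Eq. (7) at a point; the function and argument names are assumptions made for this example.

```python
def k_omega_coefficients(k, omega, nu):
    """Low-Reynolds-number k-omega coefficients following Eq. (8).

    k     : turbulent kinetic energy
    omega : turbulent frequency
    nu    : kinematic (molecular) viscosity
    """
    r_t = k / (omega * nu)                                    # turbulence Reynolds number R_T
    alpha_star = (1.0 / 40.0 + r_t / 6.0) / (1.0 + r_t / 6.0)
    alpha = (5.0 / 9.0) * (0.1 + r_t / 2.7) / (1.0 + r_t / 2.7) / alpha_star
    beta_star = 0.09 * (5.0 / 18.0 + (r_t / 8.0) ** 4) / (1.0 + (r_t / 8.0) ** 4)
    nu_z = k / omega                                          # vertical eddy viscosity, Eq. (7)
    return alpha, alpha_star, beta_star, nu_z

# For large R_T the values approach the high-Reynolds-number constants
# quoted above: alpha = 5/9, alpha* = 1 and beta* = 9/100.
```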

Secondly, a two-equation k-ε model has been applied as the turbulence model. The equations of the k-ε turbulence model are:

$$\frac{\partial k}{\partial t}+u\frac{\partial k}{\partial x}+v\frac{\partial k}{\partial y}+w\frac{\partial k}{\partial z}=\frac{\partial}{\partial z}\left(\frac{\nu_{z}}{\sigma_{k}}\frac{\partial k}{\partial z}\right)+P-\varepsilon+\frac{\partial}{\partial x}\left(\nu_{h}\frac{\partial k}{\partial x}\right)+\frac{\partial}{\partial y}\left(\nu_{h}\frac{\partial k}{\partial y}\right) \qquad (9)$$

$$\frac{\partial\varepsilon}{\partial t}+u\frac{\partial\varepsilon}{\partial x}+v\frac{\partial\varepsilon}{\partial y}+w\frac{\partial\varepsilon}{\partial z}=\frac{\partial}{\partial z}\left(\frac{\nu_{z}}{\sigma_{\varepsilon}}\frac{\partial\varepsilon}{\partial z}\right)+C_{1\varepsilon}\frac{\varepsilon}{k}P-C_{2\varepsilon}\frac{\varepsilon^{2}}{k}+\frac{\partial}{\partial x}\left(\nu_{h}\frac{\partial\varepsilon}{\partial x}\right)+\frac{\partial}{\partial y}\left(\nu_{h}\frac{\partial\varepsilon}{\partial y}\right) \qquad (10)$$

where k: kinetic energy; ε: rate of dissipation of kinetic energy; P: stress production of the kinetic energy. The following universal empirical constants of the k-ε turbulence model are used, and the vertical eddy viscosity is calculated by:

$$\nu_{z}=C_{\mu}\frac{k^{2}}{\varepsilon};\qquad C_{\mu}=0.09,\ \sigma_{\varepsilon}=1.3,\ C_{1\varepsilon}=1.44,\ C_{2\varepsilon}=1.92 \qquad (11)$$
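For comparison, a minimal sketch (again with illustrative names) of the two vertical eddy viscosity closures is given below; both supply the ν_z that appears in the vertical diffusion terms of Eqs. (2)-(4).

```python
def nu_z_k_epsilon(k, epsilon, c_mu=0.09):
    """Vertical eddy viscosity from the k-epsilon closure, Eq. (11)."""
    return c_mu * k ** 2 / epsilon

def nu_z_k_omega(k, omega):
    """Vertical eddy viscosity from the k-omega closure, Eq. (7)."""
    return k / omega
```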

Some other turbulence models, such as one-equation turbulence models and mixing-length models, have also been widely applied in the three-dimensional numerical modeling of wind-induced currents. They are also used in the developed model HYDROTAM-3; however, it is seen that the two-equation turbulence models give better predictions than the others.

3 Solution Method
The solution method is a composite finite difference-finite element method. The equations are solved numerically by approximating the horizontal gradient terms using a staggered finite difference scheme (Fig. 1a). In the vertical plane, however, the Galerkin method of finite elements is utilized. Water depths are divided into the same number of layers following the bottom topography (Fig. 1b). At all nodal points, the ratio of the length (thickness) of each element (layer) to the total depth is constant. The mesh size may be varied in the horizontal plane. Following the finite element approach, all the variables at any point over the depth are written in terms of the discrete values of these variables at the vertical nodal points by using linear shape functions.


G̃ = N1 G1^k + N2 G2^k;   N1 = (z2 − z)/lk;   N2 = (z − z1)/lk;   lk = z2 − z1    (12)

where G̃: shape-function approximation of the variable; G: any of the variables; k: element number; N1, N2: linear interpolation functions; lk: length of the k'th element; z1, z2: beginning and end elevations of element k; z: transformed variable that changes from z1 to z2 in an element.
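A minimal sketch of the vertical interpolation of Eq. (12) is given below; it assumes nodal elevations stored bottom-to-top and uses illustrative names, so it only demonstrates the linear shape functions rather than the model's actual implementation.

```python
import numpy as np

def interpolate_vertical(z, z_nodes, g_nodes):
    """Linear shape-function interpolation over the vertical elements,
    as in Eq. (12): G ~ N1*G1 + N2*G2 within the element containing z."""
    k = np.searchsorted(z_nodes, z) - 1          # element containing z
    k = np.clip(k, 0, len(z_nodes) - 2)
    z1, z2 = z_nodes[k], z_nodes[k + 1]
    lk = z2 - z1                                  # element length
    N1 = (z2 - z) / lk                            # linear interpolation functions
    N2 = (z - z1) / lk
    return N1 * g_nodes[k] + N2 * g_nodes[k + 1]

# example: 10 layers following the bottom topography at one horizontal node
z_nodes = np.linspace(-10.0, 0.0, 11)            # nodal elevations (m)
u_nodes = np.linspace(0.0, 0.2, 11)              # a velocity profile (m/s)
print(interpolate_vertical(-3.3, z_nodes, u_nodes))
```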

Fig. 1. a) Horizontal staggered finite difference scheme, ○: longitudinal horizontal velocity, u; □: lateral horizontal velocity, v; *: all other variables. b) Finite element scheme throughout the water depth.

After the application of the Galerkin method, any derivative terms with respect to the horizontal coordinates appearing in the equations are replaced by their central finite difference approximations. The system of nonlinear equations is solved by the Crank-Nicolson method, which has second-order accuracy in time. Some of the finite difference approximations are given in the following equations.

(∂l/∂x)i,j = (Δxi + Δxi+1)(li,j − li−1,j) / [(Δxi + Δxi−1)(Δxi + (Δxi+1 + Δxi−1)/2)] + (Δxi + Δxi−1)(li+1,j − li,j) / [(Δxi + Δxi+1)(Δxi + (Δxi+1 + Δxi−1)/2)]    (13)

(∂l/∂y)i,j = (Δyj + Δyj+1)(li,j − li,j−1) / [(Δyj + Δyj−1)(Δyj + (Δyj−1 + Δyj+1)/2)] + (Δyj + Δyj−1)(li,j+1 − li,j) / [(Δyj + Δyj+1)(Δyj + (Δyj−1 + Δyj+1)/2)]    (14)

(∂C/∂x)i,j = (Δxi + Δxi+1)(Ci,j − Ci−1,j) / [(Δxi + Δxi−1)(Δxi + (Δxi+1 + Δxi−1)/2)] + (Δxi + Δxi−1)(Ci+1,j − Ci,j) / [(Δxi + Δxi+1)(Δxi + (Δxi+1 + Δxi−1)/2)]    (15)

(∂C/∂y)i,j = (Δyj + Δyj+1)(Ci,j − Ci,j−1) / [(Δyj + Δyj−1)(Δyj + (Δyj−1 + Δyj+1)/2)] + (Δyj + Δyj−1)(Ci,j+1 − Ci,j) / [(Δyj + Δyj+1)(Δyj + (Δyj−1 + Δyj+1)/2)]    (16)

(∂²C/∂y²)i,j = 2 { Ci,j−1 / [((Δyj + Δyj−1)/2)(Δyj + (Δyj−1 + Δyj+1)/2)] − Ci,j / [((Δyj + Δyj−1)/2)((Δyj + Δyj+1)/2)] + Ci,j+1 / [((Δyj + Δyj+1)/2)(Δyj + (Δyj−1 + Δyj+1)/2)] }    (17)

ui+1/2,j = −ui−1,j (Δxi)² / [4Δxi−1(Δxi + Δxi−1)] + ui,j (Δxi/2 + Δxi−1) / (2Δxi−1) + ui+1,j (Δxi/2 + Δxi−1) / [2(Δxi + Δxi−1)]    (18)

vi,j+1/2 = −vi,j−1 (Δyj)² / [4Δyj−1(Δyj + Δyj−1)] + vi,j (Δyj/2 + Δyj−1) / (2Δyj−1) + vi,j+1 (Δyj/2 + Δyj−1) / [2(Δyj + Δyj−1)]    (19)

(∂²C/∂x²)i,j = 2 { Ci−1,j / [((Δxi−1 + Δxi)/2)(Δxi + (Δxi−1 + Δxi+1)/2)] − Ci,j / [((Δxi−1 + Δxi)/2)((Δxi+1 + Δxi)/2)] + Ci+1,j / [((Δxi+1 + Δxi)/2)(Δxi + (Δxi−1 + Δxi+1)/2)] }    (20)
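The sketch below evaluates, in one dimension, the non-uniform-grid derivative weights reconstructed in Eqs. (13), (15) and (20); the array layout and names are assumptions of ours, and the example only verifies that the formulas reduce to the familiar central differences on a uniform mesh.

```python
import numpy as np

def ddx_nonuniform(C, dx):
    """First derivative on a non-uniform grid, a sketch of the weighting of
    Eqs. (13)/(15); boundary points are left untouched."""
    dCdx = np.zeros_like(C)
    for i in range(1, len(C) - 1):
        dxm, dxi, dxp = dx[i - 1], dx[i], dx[i + 1]
        denom = dxi + 0.5 * (dxp + dxm)
        dCdx[i] = ((dxi + dxp) * (C[i] - C[i - 1]) / ((dxi + dxm) * denom)
                   + (dxi + dxm) * (C[i + 1] - C[i]) / ((dxi + dxp) * denom))
    return dCdx

def d2dx2_nonuniform(C, dx):
    """Second derivative on a non-uniform grid, a sketch of Eq. (20)."""
    d2 = np.zeros_like(C)
    for i in range(1, len(C) - 1):
        dxm, dxi, dxp = dx[i - 1], dx[i], dx[i + 1]
        denom = dxi + 0.5 * (dxm + dxp)
        d2[i] = 2.0 * (C[i - 1] / (0.5 * (dxm + dxi) * denom)
                       - C[i] / (0.25 * (dxm + dxi) * (dxp + dxi))
                       + C[i + 1] / (0.5 * (dxp + dxi) * denom))
    return d2

# uniform-grid check: both reduce to the usual central differences
x = np.linspace(0.0, 1.0, 11)
dx = np.full(11, x[1] - x[0])
C = x**2
print(ddx_nonuniform(C, dx)[5], d2dx2_nonuniform(C, dx)[5])   # ~1.0, ~2.0
```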

4 Model Applications
Velocity profiles simulated using the k-ε turbulence model and the k-ω turbulence model have been compared with the experimental results for wind-driven turbulent flow of a homogeneous fluid obtained by Tsanis and Leutheusser [9]. The laboratory basin had a length of 2.4 m, a width of 0.72 m and a depth of H = 0.05 m. The Reynolds number, Rs = us H ρ/μ, was 3000 (us is the surface velocity, H is the depth of the flow, ρ is the density of water and μ is the dynamic viscosity). The velocity profiles obtained by using the k-ε turbulence model and the k-ω turbulence model are compared with the measurements in Fig. 2a, and the vertical eddy viscosity distributions are given in Fig. 2b.


Fig. 2. a) Velocity profiles, b) Distribution of vertical eddy viscosity (solid line: k-ε turbulence model, dashed line: k-ω turbulence model, *: experimental data)

The root mean square error of the nondimensional horizontal velocity predicted by the k-ε turbulence model is 0.08, whereas it drops to 0.02 in the predictions obtained using the k-ω turbulence model. This is basically due to a better estimation of the vertical distribution of the vertical eddy viscosity by the k-ω turbulence model. The developed three-dimensional numerical model (HYDROTAM-3) has been applied to the Bay of Fethiye, located on the Mediterranean coast of Turkey. Water depths in the Bay are plotted in Fig. 3a. The grid system used has a square mesh size of 100x100 m. Wind characteristics are obtained from the measurements of the meteorological station in Fethiye for the period 1980-2002. The wind analysis shows that the critical wind direction for wind speeds above 7 m/s is the WNW-WSW direction. Some field measurements have been performed in the area. The current pattern over the area was observed by tracking drogues, which are moved by the currents, at water depths of 1 m, 5 m and 10 m. At Station I and Station II, shown in Fig. 3a, continuous velocity measurements were taken throughout the water depth, and at Station III water level measurements were taken for 27 days. In the application, the measurement period has been simulated and the model is forced by the recorded wind shown in Fig. 3b. No significant density stratification was recorded at the site, therefore the water density is taken as a constant. A horizontal grid spacing of Δx = Δy = 100 m is used. Horizontal eddy viscosities are calculated by the sub-grid scale turbulence model, and the vertical eddy viscosity is calculated by the k-ε turbulence model and also by the k-ω turbulence model. The sea bottom is treated as a rigid boundary. Model predictions are in good agreement with the measurements. Simulated velocity profiles over the depth at the end of 4 days are compared with the measurements taken at Station I and Station II and are shown in Fig. 4. At Station I, the root mean square error of the horizontal velocity is 0.19 cm/s in the predictions by the k-ε turbulence model and 0.11 cm/s in the predictions by the k-ω turbulence model. At Station II, the root mean square error of the horizontal velocity is 0.16 cm/s in the predictions by the k-ε turbulence model and 0.09 cm/s in the predictions by the k-ω turbulence model.

Fig. 3. a) Water depths (m) of Fethiye Bay, +: Station I, •: Station II, ∗: Station III. b) Wind speeds and directions during the measurement period.

Fig. 4. Simulated velocity profiles over the depth at the end of 4 days; solid line: k-ε turbulence model, dashed line: k-ω turbulence model, *: experimental data, a) at Station I, b) at Station II

5 Conclusions
Of the two-equation turbulence models, the k-ε model and the k-ω model have been used in three-dimensional modeling of coastal flows. The main source of coastal turbulence production is the surface current shear stress generated by the wind action. In the numerical solution a composite finite element-finite difference method has been applied. The governing equations are solved by the Galerkin weighted residual method in the vertical plane and by finite difference approximations in the horizontal plane on a staggered scheme. Generally, two-equation turbulence models give improved estimations compared to other turbulence models. In the comparisons of model predictions with both the experimental and the field measurements, it is seen that the predictions of the two-equation k-ω turbulence model are better than those of the two-equation k-ε turbulence model. This is basically due to the better parameterization of the nonlinear processes in the formulation, leading to a more reliable and numerically easier to handle vertical eddy viscosity distribution in the k-ω turbulence model.


Acknowledgment. The author wishes to thank the anonymous referees for their careful reading of the manuscript and their fruitful comments and suggestions.

References
1. Li, Z., Davies, A.G.: Turbulence Closure Modelling of Sediment Transport Beneath Large Waves. Continental Shelf Research (2001) 243-262
2. Bonnet-Verdier, C., Angot, P., Fraunie, P., Coantic, M.: Three Dimensional Modelling of Coastal Circulations with Different k-ε Closures. Journal of Marine Systems (2006) 321-339
3. Baumert, H., Peters, H.: Turbulence Closure, Steady State, and Collapse into Waves. Journal of Physical Oceanography 34 (2004) 505-512
4. Neary, V.S., Sotiropoulos, F., Odgaard, A.J.: Three Dimensional Numerical Model of Lateral Intake Inflows. Journal of Hydraulic Engineering 125 (1999) 126-140
5. Balas, L., Özhan, E.: An Implicit Three Dimensional Numerical Model to Simulate Transport Processes in Coastal Water Bodies. International Journal for Numerical Methods in Fluids 34 (2000) 307-339
6. Balas, L., Özhan, E.: Three Dimensional Modelling of Stratified Coastal Waters. Estuarine, Coastal and Shelf Science 56 (2002) 75-87
7. Balas, L.: Simulation of Pollutant Transport in Marmaris Bay. China Ocean Engineering, Nanjing Hydraulics Research Institute (NHRI) 15 (2001) 565-578
8. Balas, L., Özhan, E.: A Baroclinic Three Dimensional Numerical Model Applied to Coastal Lagoons. Lecture Notes in Computer Science 2658 (2003) 205-212
9. Tsanis, K.I., Leutheusser, H.J.: The Structure of Turbulent Shear-Induced Countercurrent Flow. Journal of Fluid Mechanics 189 (1988) 531-552

Vortex Identification in the Wall Region of Turbulent Channel Flow Giancarlo Alfonsi1 and Leonardo Primavera2 1

Dipartimento di Difesa del Suolo, Università della Calabria Via P. Bucci 42b, 87036 Rende (Cosenza), Italy [email protected] 2 Dipartimento di Fisica, Università della Calabria Via P. Bucci 33b, 87036 Rende (Cosenza), Italy [email protected]

Abstract. Four widely-used techniques for vortex detection in turbulent flows are investigated and compared. The flow of a viscous incompressible fluid in a plane channel is simulated numerically by means of a parallel computational code based on a mixed spectral-finite difference algorithm for the numerical integration of the Navier-Stokes equations. The DNS approach (Direct Numerical Simulation) is followed in the calculations, performed at friction Reynolds number Reτ = 180 . A database representing the turbulent statistically steady state of the velocity field through 10 viscous time units is assembled and the different vortex-identification techniques are applied to the database. It is shown that the method of the “imaginary part of the complex eigenvalue pair of the velocity-gradient tensor” gives the best results in identifying hairpin-like vortical structures in the wall region of turbulent channel flow. Keywords: turbulence, direct numerical simulation, wall-bounded flows, vortex-eduction methods.

1 Introduction
Organized vortical structures in wall-bounded flows have been investigated by several authors. One of the first contributions to the issue of the presence of vortices in the wall region of turbulent shear flows is due to Theodorsen [1], who introduced the hairpin vortex model. Robinson [2] confirmed the existence of non-symmetric arch- and quasi-streamwise vortices on the basis of the evaluation of DNS results. Studies involving the dynamics of hairpin vortices in the boundary layer have been performed by Perry & Chong [3], Acarlar & Smith [4,5], Smith et al. [6], Haidari & Smith [7] and Singer & Joslin [8]. On these bases, a picture of vortex generation and reciprocal interaction in the boundary layer emerges in which processes of interaction of existing vortices with wall-layer fluid involve viscous-inviscid interaction, generation of new vorticity, redistribution of existing vorticity, vortex stretching near the wall and vortex relaxation in the outer region. The process of evolution of a hairpin vortex involves the development of vortex legs in regions of increasing shear with intensification of vorticity in the legs themselves. The leg of a vortex – considered in isolation – may


appear as a quasi-streamwise vortex near the wall. The head of a vortex instead, rises through the shear flow, entering regions of decreasing shear. As a consequence, the vorticity in the vortex head diminishes. In spite of the remarkable amount of scientific work accomplished in this field, still there are no definite conclusions on the character of the phenomena occurring in the wall region of wall-bounded turbulent flows. Modern techniques for the numerical integration of the Navier-Stokes equations (advanced numerical methods and highperformance computing) have the ability of remarkably increasing the amount of data gathered during a research of computational nature, bringing to the condition of managing large amounts of data. A typical turbulent-flow database includes all three components of the fluid velocity in all points of a three-dimensional domain, evaluated for an adequate number of time steps of the turbulent statistically steady state. Mathematically-founded methods for the identification of vortical structures from a turbulent-flow database have been introduced by: i) Perry & Chong [9], based on the complex eigenvalues of the velocity-gradient tensor; ii) Hunt et al. [10] and Zhong et al. [11], based on the second invariant of the velocity-gradient tensor; iii) Zhou et al. [12], based on the imaginary part of the complex eigenvalue pair of the velocity-gradient tensor; iv) Jeong & Hussain [13], based on the analysis of the Hessian of the pressure. These techniques for vortex eduction are extensively used in turbulence research but no work exists in which their ability in vortex detection is systematically compared. In the present work the capability of vortex identification of the four techniques outlined above, is analyzed.

2 Vortex-Identification Techniques
2.1 Complex Eigenvalues of the Velocity-Gradient Tensor (Method A)
Perry & Chong [9] proposed a definition of a vortex as a region of space where the rate-of-deformation tensor has complex eigenvalues. By considering the system of the Navier-Stokes equations, an arbitrary point O can be chosen in the flow field and a Taylor series expansion of each velocity component can be performed in terms of the space coordinates, with the origin in O. The first-order pointwise linear approximation at point O is:

ui = ẋi = Ai + Aij xj    (1)

and if O is located at a critical point, the zero-order terms Ai are equal to zero, being Aij = ∂ui ∂x j the velocity-gradient tensor (rate-of-deformation tensor, A = ∇u ). In the case of incompressible flow, the characteristic equation of Aij becomes:

λ³ + Qλ + R = 0    (2)

where Q and R are invariants of the velocity-gradient tensor (the other invariant P = 0 by continuity). Complex eigenvalues of the velocity-gradient tensor occur when the discriminant of Aij , D > 0 . According to this method, whether or not a

Vortex Identification in the Wall Region of Turbulent Channel Flow

11

region of vorticity appears as a vortex depends on its environment, i.e. on the local rate-of-strain field induced by the motions outside of the region of interest. 2.2 Second Invariant of the Velocity-Gradient Tensor (Method B)

Hunt et al. [10] and Zhong et al. [11] devised another criterion, defining as an eddy zone a region characterized by positive values of the second invariant Q of the velocity-gradient tensor. The velocity-gradient tensor can be split into symmetric and antisymmetric parts: Aij = Sij + Wij

(3)

being Sij the rate-of-strain tensor (corresponding to the pure irrotational motion) and Wij the rate-of-rotation tensor (corresponding to the pure rotational motion). The second invariant of Aij can be written as:

Q = (Wij Wij − Sij Sij)/2    (4)

where the first term on the right-hand side of (4) is proportional to the enstrophy density and the second term is proportional to the rate of dissipation of kinetic energy.

2.3 Imaginary Part of the Complex Eigenvalue Pair of the Velocity-Gradient Tensor (Method C)

Zhou et al. [12] adopted the criterion of identifying vortices by visualizing isosurfaces of the imaginary part of the complex eigenvalue pair of the velocity-gradient tensor (actually of its square). By considering equation (2) and defining the quantities:

J = (−R/2 + (R²/4 + Q³/27)^(1/2))^(1/3),   K = −(R/2 + (R²/4 + Q³/27)^(1/2))^(1/3)    (5)

one has:

λ1 = J + K,   λ2 = −(J + K)/2 + ((J − K)/2)√−3,   λ3 = −(J + K)/2 − ((J − K)/2)√−3.    (6)

The method of visualizing isosurfaces (of the square) of the imaginary part of the complex eigenvalue pair of the velocity-gradient tensor is frame independent and due to the fact that the eigenvalue is complex only in regions of local circular or spiralling streamline, it automatically eliminates regions having vorticity but no local spiralling motion, such as shear layers. 2.4 Analysis of the Hessian of the Pressure (Method D)

Jeong & Hussain [13] proposed a definition of a vortex by reasoning on the issue of the pressure minimum, as follows: “... a vortex core is a connected region characterized by two negative eigenvalues of the tensor B = S² + W² ...”, where S


and W are the symmetric and antisymmetric parts of the velocity-gradient tensor. The gradient of the Navier-Stokes equation is considered and decomposed into symmetric and antisymmetric parts. By considering the symmetric part (the antisymmetric portion is the vorticity-transport equation), one has:

DSij/Dt − ν ∂²Sij/∂xk∂xk + Bij = −(1/ρ) ∂²p/∂xi∂xj    (7)

where:

Bij = Sik Skj + Wik Wkj.    (8)

The existence of a local pressure minimum requires two positive eigenvalues of the Hessian tensor (∂²p/∂xi∂xj). By neglecting the contribution of the first two terms on the left-hand side of equation (7), only tensor (8) is considered to determine the existence of a local pressure minimum due to a vortical motion, i.e. the presence of two negative eigenvalues of B. The tensor B is symmetric by construction, so all its eigenvalues are real and can be ordered (λ1 ≥ λ2 ≥ λ3). According to this method a vortex is defined as a connected region of the flow with the requirement that the intermediate eigenvalue of B, λ2 < 0.
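For reference, the sketch below evaluates, from a single 3×3 velocity-gradient tensor, the quantities used by the four methods: Q (method B), the presence of a complex eigenvalue pair (method A), the squared imaginary part of that pair (method C), and the intermediate eigenvalue of B = S² + W² (method D). It is an independent illustration with our own naming, not the authors' code.

```python
import numpy as np

def vortex_indicators(A):
    """Pointwise indicators for methods A-D from a 3x3 tensor A = grad(u)."""
    S = 0.5 * (A + A.T)                      # rate-of-strain tensor
    W = 0.5 * (A - A.T)                      # rate-of-rotation tensor
    # Method B: second invariant Q, Eq. (4); Q > 0 marks an eddy zone
    Q = 0.5 * (np.sum(W * W) - np.sum(S * S))
    # Methods A and C: complex eigenvalues of A and (squared) swirling strength
    eigvals = np.linalg.eigvals(A)
    imag_parts = np.abs(eigvals.imag)
    has_complex_pair = bool(np.any(imag_parts > 1e-12))     # method A
    lambda_ci_sq = float(np.max(imag_parts) ** 2)           # method C
    # Method D: intermediate eigenvalue of B = S^2 + W^2; lambda2 < 0 in a vortex
    B = S @ S + W @ W
    lambda2 = float(np.sort(np.linalg.eigvalsh(B))[::-1][1])
    return Q, has_complex_pair, lambda_ci_sq, lambda2

# example: a gradient tensor with local swirl superposed on weak shear
A = np.array([[0.0, -1.0, 0.1],
              [1.0,  0.0, 0.0],
              [0.0,  0.2, 0.0]])
print(vortex_indicators(A))
```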

3 Numerical Simulations
The numerical simulations are performed with a parallel computational code based on a mixed spectral-finite difference technique. The unsteady Navier-Stokes equation (besides continuity) for incompressible fluids with constant properties, in three dimensions and non-dimensional conservative form, is considered (i & j = 1,2,3):

∂ui/∂t + ∂(ui uj)/∂xj = −∂p/∂xi + (1/Reτ) ∂²ui/∂xj∂xj    (9)

where ui (u, v, w) are the velocity components in the Cartesian coordinate system xi (x, y, z). Equation (9) is nondimensionalized by the channel half-width h for lengths, the wall shear velocity uτ for velocities, ρuτ² for pressure and h/uτ for time, Reτ = uτ h/ν being the friction Reynolds number. The fields are assumed to be periodic in the streamwise (x) and spanwise (z) directions, and equation (9) is Fourier transformed accordingly. The nonlinear terms in the momentum equation are evaluated pseudospectrally by anti-transforming the velocities back to physical space to perform the products (FFTs are used). In order to have a better spatial resolution near the walls, a grid-stretching law of hyperbolic-tangent type is introduced for the grid points along y, the direction orthogonal to the walls. For time advancement, a third-order Runge-Kutta algorithm is implemented and time marching is executed with the fractional-step method. No-slip boundary conditions at the walls and cyclic conditions in the streamwise and spanwise directions are applied to the velocity. More detailed descriptions of the numerical scheme, of its reliability and of the performance obtained on the parallel computers that have been used, can be found in Alfonsi et al.


[14] and Passoni et al. [15,16,17]. The characteristic parameters of the numerical simulations are the following. Computing domain: Lx⁺ = 1131, Ly⁺ = 360, Lz⁺ = 565 (wall units). Computational grid: Nx = 96, Ny = 129, Nz = 64. Grid spacing: Δx⁺ = 11.8, Δy⁺(center) = 4.4, Δy⁺(wall) = 0.87, Δz⁺ = 8.8 (wall units). It can be verified that there are 6 grid points in the y direction within the viscous sublayer (y⁺ ≤ 5). After the insertion of appropriate initial conditions, the initial transient of the flow in the channel is simulated, the turbulent statistically steady state is reached and calculated for a time t = 10 δ/uτ (t⁺ = 1800). 20000 time steps are calculated with a temporal resolution of Δt = 5 × 10⁻⁴ δ/uτ (Δt⁺ = 0.09). In Figure 1, the computed turbulence intensities (in wall units) of the present work are compared with the results of Moser et al. [18] at Reτ = 180. The agreement between the present results and the results of Moser et al. [18] (obtained with a fully spectral code) is rather satisfactory.
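A wall-normal grid-stretching law of hyperbolic-tangent type, as mentioned above, can be sketched as follows; the stretching parameter gamma is an assumed value, since the paper does not quote the one actually used.

```python
import numpy as np

def tanh_stretched_grid(ny=129, gamma=1.8):
    """Wall-normal points clustered near both walls by a tanh law (a sketch);
    ny matches the Ny used in the paper, gamma is an assumed parameter."""
    eta = np.linspace(-1.0, 1.0, ny)                 # uniform computational coordinate
    y = np.tanh(gamma * eta) / np.tanh(gamma)        # physical coordinate in [-1, 1]
    return y

y = tanh_stretched_grid()
dy = np.diff(y)
print(f"min spacing (wall) = {dy.min():.4f}, max spacing (center) = {dy.max():.4f}")
```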

Fig. 1. Rms values of the velocity fluctuations normalized by the friction velocity in wall coordinates. Present work: (—) u′rms, (---) v′rms, (···) w′rms. Data from Moser et al. [18]: (+) u′rms, (×) v′rms, (*) w′rms.

4 Results
In Figures 2a-b the vortical structure that occurs at t⁺ = 1065.6, as detected with method A, is represented and visualized from two different points of view (isosurfaces corresponding to 5% of the maximum value are used in all representations). The top and side views show a structure that does not correspond very well to a hairpin vortex. Portions of the head and legs are visible, while the neck is almost missing. The visualization of the flow structure shows, in practice, a sketch of a hairpin that can be completed only by intuition. Of the four methods examined, method A gives the least satisfactory representation of the flow structure at the bottom wall of the computational domain at t⁺ = 1065.6.


Fig. 2. Method A. Representation of hairpin vortex: a) top view; b) side view.

Fig. 3. Method B. Representation of hairpin vortex: a) top view; b) side view.

Fig. 4. Method C. Representation of hairpin vortex: a) top view; b) side view.

In Figures 3a-b the flow structure as educted with method B is represented. A hairpin-like vortical structure, more complete with respect to the former case, is visible. The head of the vortex is almost complete and well defined. Of the two legs, one is longer than the other and both are longer with respect to those of method A. In turn, a portion of the vortex neck is missing, as can be seen from Figure 3b. Figures 4a-b show the vortex structure extracted with method C. The figures show a complete and well-defined hairpin vortex, with legs, neck and head clearly represented and no missing parts anywhere. Of the four eduction techniques tested, this is the best


Fig. 5. Method D. Representation of hairpin vortex: a) top view; b) side view.

representation that can be obtained. From Figure 4a it can be noted that the hairpin exhibits two legs of length comparable to those shown in Figure 3a. Figure 4b shows a hairpin neck remarkably more thick with respect to that of Figure 3b. The results obtained with the use of method D are shown in Figures 5a-b. Also in this case a not complete hairpin vortex structure appears. Figure 5b shows that a portion of the neck of the vortex is missing.

5 Concluding Remarks
The case of the flow of a viscous incompressible fluid in a plane channel is simulated numerically at Reτ = 180 and a turbulent-flow database is assembled. Four different criteria for vortex eduction are applied to the database and compared, showing that: i) the method of the “complex eigenvalues of the velocity-gradient tensor” gives the least satisfactory results in terms of vortex representation; ii) the method of the “second invariant of the velocity-gradient tensor” and that based on the “analysis of the Hessian of the pressure” give intermediate results, in the sense that a hairpin-like vortical structure actually appears, but with missing parts or non-optimal definition; iii) the best results are obtained by using the method of the “imaginary part of the complex eigenvalue pair of the velocity-gradient tensor”.

References 1. Theodorsen, T.: Mechanism of turbulence. In Proc. 2nd Midwestern Mechanics Conf. (1952) 1 2. Robinson, S.K.: Coherent motions in the turbulent boundary layer. Annu. Rev. Fluid Mech. 23 (1991) 601 3. Perry, A.E., Chong, M.S.: On the mechanism of wall turbulence. J. Fluid Mech. 119 (1982) 173 4. Acarlar, M.S., Smith, C.R.: A study of hairpin vortices in a laminar boundary layer. Part 1. Hairpin vortices generated by a hemisphere protuberance. J. Fluid Mech. 175 (1987) 1 5. Acarlar, M.S., Smith, C.R.: A study of hairpin vortices in a laminar boundary layer. Part 2. Hairpin vortices generated by fluid injection. J. Fluid Mech. 175 (1987) 43


6. Smith, C.R., Walker, J.D.A., Haidari A.H., Soburn U.: On the dynamics of near-wall turbulence. Phil. Trans. R. Soc. A 336 (1991) 131 7. Haidari, A.H., Smith, C.R.: The generation and regeneration of single hairpin vortices. J. Fluid Mech. 277 (1994) 135 8. Singer, B.A., Joslin R.D.: Metamorphosis of a hairpin vortex into a young turbulent spot. Phys. Fluids 6 (1994) 3724 9. Perry, A.E., Chong, M.S.: A description of eddying motions and flow patterns using critical-point concepts. Annu. Rev. Fluid Mech. 19 (1987) 125 10. Hunt, J.C.R., Wray, A.A., Moin, P.: Eddies, streams and convergence zones in turbulent flows. In Proc. Center Turbulence Research 1988 Summer Prog. NASA Ames/Stanford University (1988) 193 11. Zhong, J., Huang, T.S., Adrian, R.J.: Extracting 3D vortices in turbulent fluid flow. IEEE Trans. Patt. Anal. Machine Intell. 20 (1998) 193 12. Zhou, J., Adrian, R.J., Balachandar, S., Kendall, T.M.: Mechanisms for generating coherent packets of hairpin vortices in channel flow. J. Fluid Mech. 387 (1999) 353 13. Jeong, J., Hussain, F.: On the definition of a vortex. J. Fluid Mech. 285 (1995) 69 14. Alfonsi, G., Passoni, G., Pancaldo, L., Zampaglione D.: A spectral-finite difference solution of the Navier-Stokes equations in three dimensions. Int. J. Num. Meth. Fluids 28 (1998) 129 15. Passoni, G., Alfonsi, G., Tula, G., Cardu, U.: A wavenumber parallel computational code for the numerical integration of the Navier-Stokes equations. Parall. Comp. 25 (1999) 593 16. Passoni, G., Cremonesi, P., Alfonsi, G.: Analysis and implementation of a parallelization strategy on a Navier-Stokes solver for shear flow simulations. Parall. Comp. 27 (2001) 1665 17. Passoni, G., Alfonsi, G., Galbiati, M.: Analysis of hybrid algorithms for the Navier-Stokes equations with respect to hydrodynamic stability theory. Int. J. Num. Meth. Fluids 38 (2002) 1069 18. Moser, R.D., Kim, J., Mansour, N.N.: Direct numerical simulation of turbulent channel flow up to Reτ = 590 . Phys. Fluids 11 (1999) 943

Numerical Solution of a Two-Class LWR Traffic Flow Model by High-Resolution Central-Upwind Scheme Jianzhong Chen1 , Zhongke Shi1 , and Yanmei Hu2 1

College of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, P.R. China [email protected],[email protected] 2 College of Science, Chang’an University, Xi’an, Shaanxi 710064, P.R. China [email protected]

Abstract. A high-resolution semi-discrete central-upwind scheme for solving a two-class Lighthill-Whitham-Richards (LWR) traffic flow model is investigated in this paper. This scheme is based on combining a fourth-order central weighted essentially non-oscillatory (CWENO) reconstruction with a semi-discrete central-upwind numerical flux. The CWENO reconstruction is chosen to improve the accuracy and guarantee the non-oscillatory behavior of the present method. The strong stability preserving Runge-Kutta method is used for time integration. The resulting method is applied to simulating several tests such as a mixture of the two traffic flows. The simulated results illustrate the effectiveness of the present method. Keywords: Traffic flow model, central-upwind scheme, CWENO reconstruction.

1 Introduction

Continuum traffic flow models are of great practical importance in many applications such as traffic simulation, traffic control, etc. The Lighthill-Whitham-Richards (LWR) model, proposed independently by Lighthill and Whitham [1] and Richards [2], is the forerunner of all other continuum traffic flow models. In recent years a considerable amount of research has been done on implementing and extending the LWR model. Zhang [3] and Jiang et al. [4] proposed higher-order continuum models. Wong and Wong [5] presented a multi-class LWR traffic flow model (MCLWR model). For the numerical method, the Lax-Friedrichs scheme was used to solve the MCLWR model in [5]. The Lax-Friedrichs scheme is only first-order accurate and yields a relatively poor resolution due to the excessive numerical dissipation. Recently, the Godunov scheme was also employed to solve the LWR model [6] and a higher-order model [7]. However, the Godunov scheme needs to use exact or approximate Riemann solvers, which make the scheme complicated and time-consuming. Zhang et al. [8] pointed out that the scalar LWR model and those higher-order continuum models proposed so far contain hyperbolic partial differential equations. One important feature of this type of equation is that it


admits both smooth and discontinuous solutions such as shocks. However, lower-order numerical methods may produce smeared solutions near discontinuities due to excessive numerical viscosity. A high-order scheme can provide satisfactory resolution. Moreover, problems in which solutions contain rich smooth-region structures can be resolved by a high-order scheme on a relatively small number of grid points. To embody the traffic phenomena described by a traffic flow model completely and resolve discontinuities well, a high-resolution shock-capturing numerical method is required. A recent application of the weighted essentially non-oscillatory (WENO) scheme can be found in [8,9]. In this paper we study another type of shock-capturing scheme, the so-called high-resolution semi-discrete central-upwind schemes originally introduced in [10], which have attracted considerable attention more recently. These schemes enjoy the advantages of high-resolution central schemes. They are free of Riemann solvers, require no characteristic decompositions and retain high resolution similar to the upwind results. At the same time, they have an upwind nature. These features make the semi-discrete central-upwind schemes a universal, efficient and robust tool for a wide variety of applications. With regard to their application to traffic flow problems, we have not yet seen any research works. In this work the semi-discrete central-upwind scheme combined with the fourth-order central WENO (CWENO) reconstruction [11] is applied to a two-class LWR traffic flow model. This paper is organized as follows. Section 2 presents the two-class LWR traffic flow model. In section 3 we describe our numerical method. Numerical simulations are carried out in section 4. The conclusions are given in section 5.

2 The Two-Class Model

The MCLWR model [5] describes the characteristics of traffic flow of M classes of road users with different speed choice behaviors in response to the same density when traveling on a highway section. There are some difficulties in computing the eigenvalues and proving the hyperbolicity of the model for M > 3. In this paper, we consider the two-class (M = 2) LWR traffic flow model, which can be written in conservation form as

ut + f(u)x = 0,    (1)

where u is the vector of conserved variables and f(u) is the vector of fluxes, given respectively by u = (ρ1, ρ2)ᵀ and f(u) = (ρ1 u1(ρ), ρ2 u2(ρ))ᵀ, where ρ1 and ρ2 are the densities for Class 1 and Class 2 traffic, respectively, ρ = ρ1 + ρ2 is the total density, and u1(ρ) and u2(ρ) are the velocity-density relationships. The two eigenvalues of the Jacobian are

λ1,2 = (u1(ρ) + ρ1 u1′(ρ) + u2(ρ) + ρ2 u2′(ρ) ± √Δ)/2,    (2)


where

Δ = ((u1(ρ) + ρ1 u1′(ρ)) − (u2(ρ) + ρ2 u2′(ρ)))² + 4ρ1ρ2 u1′(ρ)u2′(ρ).    (3)

Since Δ ≥ 0 and λ1,2 are real, the model is hyperbolic.
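A small sketch of Eqs. (2)-(3) is given below; it uses the linear velocity-density relations introduced later in Eq. (12), with free-flow speeds taken from the numerical examples, and numerically confirms that both eigenvalues are real.

```python
import numpy as np

def lwr2_eigenvalues(rho1, rho2, u1f=14.0, u2f=20.0, rho_m=1.0):
    """Eigenvalues (2)-(3) of the two-class LWR Jacobian with the linear
    velocity-density relations of Eq. (12); parameter values are assumptions
    drawn from the paper's examples."""
    rho = rho1 + rho2
    u1, u2 = u1f * (1.0 - rho / rho_m), u2f * (1.0 - rho / rho_m)
    du1, du2 = -u1f / rho_m, -u2f / rho_m          # u1'(rho), u2'(rho)
    a, b = u1 + rho1 * du1, u2 + rho2 * du2
    delta = (a - b) ** 2 + 4.0 * rho1 * rho2 * du1 * du2   # Eq. (3), always >= 0
    lam1 = 0.5 * (a + b + np.sqrt(delta))
    lam2 = 0.5 * (a + b - np.sqrt(delta))
    return lam1, lam2

print(lwr2_eigenvalues(0.2, 0.2))    # both real: the system is hyperbolic
```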

3 Numerical Scheme

For simplicity, let us consider a uniform grid, xα = αΔx, tⁿ = nΔt, where Δx and Δt are the uniform spatial and time steps, respectively. The cell average in the spatial cell Ij = [xj−1/2, xj+1/2] at time t = tⁿ is denoted by ūjⁿ = (1/Δx) ∫_{Ij} u(x, tⁿ) dx. Starting with the given cell averages {ūjⁿ}, a piecewise polynomial interpolant is reconstructed:

ũ(x) = Σj pjⁿ(x) χj(x).    (4)

Here χj is the characteristic function of the interval Ij and pjⁿ(x) is a polynomial of a suitable degree. Different semi-discrete central-upwind schemes are characterized by different reconstructions. Given such a reconstruction, the point-values of ũ at the interface points {xj+1/2} are denoted by u⁺j+1/2 = pⁿj+1(xj+1/2, tⁿ) and u⁻j+1/2 = pⁿj(xj+1/2, tⁿ). The discontinuities of the reconstruction (4) at the cell interfaces propagate with right- and left-sided local speeds, which can be estimated by

a⁺j+1/2 = max{ λN(∂f/∂u(u⁻j+1/2)), λN(∂f/∂u(u⁺j+1/2)), 0 },
a⁻j+1/2 = min{ λ1(∂f/∂u(u⁻j+1/2)), λ1(∂f/∂u(u⁺j+1/2)), 0 }.    (5)

Here λ1, ···, λN denote the N eigenvalues of ∂f/∂u. The semi-discrete central-upwind scheme for the spatial discretization of equation (1) can be given by (see [10] for the detailed derivation)

d ūj(t)/dt = −[Hj+1/2(t) − Hj−1/2(t)]/Δx,    (6)

where the numerical flux Hj+1/2 is

Hj+1/2(t) = [a⁺j+1/2 f(u⁻j+1/2) − a⁻j+1/2 f(u⁺j+1/2)]/(a⁺j+1/2 − a⁻j+1/2) + [a⁺j+1/2 a⁻j+1/2/(a⁺j+1/2 − a⁻j+1/2)] (u⁺j+1/2 − u⁻j+1/2).    (7)
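A minimal sketch of the interface flux (7) for a scalar problem is shown below; the local speeds are assumed to have been computed from Eq. (5), and all names are ours.

```python
import numpy as np

def central_upwind_flux(f, u_minus, u_plus, a_plus, a_minus):
    """Central-upwind flux of Eq. (7) at one interface; u_minus/u_plus are the
    reconstructed one-sided states, a_plus >= 0 >= a_minus the local speeds."""
    if a_plus - a_minus < 1e-12:                  # both speeds vanish
        return 0.5 * (f(u_minus) + f(u_plus))
    return ((a_plus * f(u_minus) - a_minus * f(u_plus)) / (a_plus - a_minus)
            + (a_plus * a_minus) / (a_plus - a_minus) * (u_plus - u_minus))

# scalar example with an LWR-type flux f(u) = u*(1-u)
f = lambda u: u * (1.0 - u)
print(central_upwind_flux(f, np.array([0.3]), np.array([0.6]), 0.4, -0.2))
```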

Note that different semi-discrete central-upwind schemes are typical of different reconstructions. The accuracy of the semi-discrete scheme (6)-(7) depends

20

J. Chen, Z. Shi, and Y. Hu

on the accuracy of the reconstruction (4). One can use the second-order piecewise linear reconstruction, the third-order piecewise quadratic reconstruction, highly accurate essentially non-oscillatory (ENO) reconstruction, highly accurate WENO reconstruction or highly accurate CWENO reconstruction. In this work, we have used a fourth-order CWENO reconstruction proposed in [11] to compute the point values u±j+1/2. To simplify notation, the superscript n will be omitted below. In each cell Ij, the reconstruction pj(x) is a convex combination of three quadratic polynomials ql(x), l = j − 1, j, j + 1:

pj(x) = ωʲj−1 qj−1(x) + ωʲj qj(x) + ωʲj+1 qj+1(x),

where the ωʲl are the weights, which satisfy ωʲl ≥ 0 and Σ_{l=j−1}^{j+1} ωʲl = 1. The quadratic polynomials ql(x), l = j − 1, j, j + 1, are given by

ql(x) = ũl + ũ′l (x − xl) + (1/2) ũ″l (x − xl)²,   l = j − 1, j, j + 1.    (8)

Here ũ″l = (ul+1 − 2ul + ul−1)/Δx²,  ũ′l = (ul+1 − ul−1)/(2Δx),  and  ũl = ul − (Δx²/24) ũ″l.    (9)

The weights ωʲl are

ωʲl = αʲl/(αʲj−1 + αʲj + αʲj+1),   αʲl = Cl/(ε + ISʲl)²,   l = j − 1, j, j + 1,    (10)

where Cj−1 = Cj+1 = 3/16, Cj = 5/8. The constant ε is used to prevent the denominators from becoming zero and is taken as ε = 10⁻⁶. The smoothness indicators ISʲl are calculated by

ISʲj−1 = (13/12)(uj−2 − 2uj−1 + uj)² + (1/4)(uj−2 − 4uj−1 + 3uj)²,
ISʲj = (13/12)(uj−1 − 2uj + uj+1)² + (1/4)(uj−1 − uj+1)²,
ISʲj+1 = (13/12)(uj − 2uj+1 + uj+2)² + (1/4)(3uj − 4uj+1 + uj+2)².    (11)

(11)

The time discretization of the semi-discrete scheme is achieved by third-order strong stability preserving Runge-Kutta solver [12].

4

Numerical Examples

In this section, we choose several numerical examples presented in [9] as out test case. The results demonstrate the performance of the present method for the two-class LWR traffic flow model. In all examples, the following velocity-density relationships are chose: u1 (ρ) = u1f (1 − ρ/ρm ), u2 (ρ) = u2f (1 − ρ/ρm ) ,

(12)

where ρm is maximal density and u1f and u2f are the free flow velocity for Class 1 and Class 2 traffic, respectively. Moreover, the variables of space, time, density

Numerical Solution of a Two-Class LWR Traffic Flow Model

21

and velocity are scaled by L, T , ρm and uf , where L is the length of the road, T is computational time and uf = max(u1f , u2f ). A variable is also non-dimensional if it is not followed by its unit. Example 1: Mixture of the two traffic flows. The computational parameters are L = 6000m, T = 400s, Δx = 60m, Δt = 0.4s, u1f = 14m/s and u2f = 20m/s. The initial data is taken as the following Riemann problem:  (0, 0.4) , x < 0.1 , u(x, 0) = (13) (0.4, 0) , x > 0.1 . In this test Class 2 traffic will mix in Class 1 traffic, which causes the increase of total density. Its solution contains a shock, a constant region and a rarefaction. The total density computed by the presented method is shown in Fig. 1. To illustrate the advantage of using high-order schemes, the Godunov scheme with the Rusanov approximate Riemann solver [13,14] is also adopted to compute the same problem using the same parameters. Here and below, this scheme is abbreviated to GR. The scheme presented in this paper is abbreviated to CP4. The result computed by GR scheme is presented in Fig. 2. This comparison demonstrates the clear advantage of SD4 scheme over GR scheme. The SD4 scheme has the higher shock resolution and smaller numerical dissipation. 0.6

Fig. 1. Example 1: The total density. The solution is computed with CP4 scheme.

Example 2: Separation of the two traffic flows. The parameters are L = 8000 m, T = 400 s, Δx = 80 m, Δt = 0.4 s, u1f = 10 m/s and u2f = 20 m/s. The Riemann initial data is used:

u(x, 0) = (0.2, 0) for x < 0.1,   (0, 0.2) for x > 0.1.    (14)


Fig. 2. Example 1: The total density. The solution is computed with GR scheme.

Fig. 3. Example 2: The total density. The solution is computed with CP4 scheme.

Note that u2 = 16 m/s > u1f, and thus Class 1 drivers cannot keep up with Class 2 drivers. A vacuum region is formed between Class 1 and Class 2 traffic. This test has a solution consisting of a right shock, a constant region and a left rarefaction. Figs. 3 and 4 show the results obtained with the CP4 and GR schemes, respectively. It can be seen that the discontinuities are well resolved by the CP4 scheme. Example 3: A close following of the two traffic flows. The parameters are L = 4000 m, T = 240 s, Δx = 80 m, Δt = 0.4 s, u1f = 14 m/s and u2f = 20 m/s. The Riemann initial data is used:

u(x, 0) = (0.2, 0) for x < 0.1,   (0, 0.44) for x > 0.1.    (15)

The high-resolution properties of the CP4 scheme are illustrated in Fig. 5.


Fig. 4. Example 2: The total density. The solution is computed with GR scheme.

Fig. 5. Example 3: The total density. The solution is computed with CP4 scheme.

5 Conclusions

As an attempt to simulate traffic flow by high-resolution finite difference schemes, we have applied the semi-discrete central-upwind scheme to a two-class LWR traffic flow model in this paper. The numerical results demonstrate that the semi-discrete central-upwind scheme resolves the shock and rarefaction waves well. This universal, efficient and high-resolution scheme will be implemented and applied to higher-order continuum models and multi-class models to simulate traffic flow in our future work.


References 1. Lighthill, M. J., Whitham, G. B.: On kinematic waves (II)-A theory of traffic flow on long crowed roads. Proc. R. Sco. London, Ser. A 22 (1955) 317-345 2. Richards, P. I.: Shock waves on the highway. Oper. Res. 4 (1956) 42-51 3. Zhang, H. M.: A non-equilibrium traffic model devoid of gas-like behavior. Transportation Research B 36 (2002) 275-290 4. Jiang, R., Wu, Q. S., Zhu, Z. J.: A new continuum model for traffic flow and numerical tests. Transportation Research B 36 (2002) 405-419 5. Wong, G. C. K., Wong, S. C.: A multi-class traffic flow model-an extension of LWR model with heterogeneous drivers. Transportation Research A 36 (2002) 827-841 6. Lebacque, J. P.: The Godunov scheme and what it means for first order traffic flow models. In: Lesort, J. B. (eds.): Proceedings of the 13th International Symposium on Transportation and Traffic Theory. Elsevier Science Ltd., Lyon France (1996) 647-677 7. Zhang, H. M.: A finite difference approximation of a non-equilibrium traffic flow model. Transportation Research B 35 (2001) 337-365 8. Zhang, M. P., Shu, C.-W., Wong, G. C. K., Wong, S. C.: A weighted essentially nonoscillatory numerical scheme for a multi-class Lighthill-Whitham-Richards traffic flow model. Journal of Computational Physics 191 (2003) 639-659 9. Zhang, P., Liu, R. X., Dai, S. Q.: Theoretical analysis and numerical simulation on a two-phase traffic flow LWR model. Journal of university of science and technology of China 35 (2005) 1-11 10. Kurganov, A., Noelle, S., Petrova, G.: Semi-discrete central-upwind schemes for hyperbolic conservation laws and Hamilton-Jacobi equations. SIAM J. Sci. Comput. 23 (2001) 707-740 11. Levy, D., Puppo, G., Russo, G.: Central WENO schemes for hyperbolic systems of conservation laws. Math. Model. Numer. Anal. 33 (1999) 547-571 12. Gottlieb, S., Shu, C.-W., Tadmor, E.: Strong stability preserving high order time discretization methods. SIAM Rev. 43 (2001) 89-112 13. Toro, E. F.: Riemann Solvers and Numerical Methods for Fluid Dynamics. Springer-Verlag, Berlin Heidelberg New York (1997) 14. Rusanov, V. V.: Calculation of interaction of non-steady shock waves with obstacles. J. Comput. Math. Phys. 1 (1961) 267-279

User-Controllable GPGPU-Based Target-Driven Smoke Simulation Jihyun Ryu1 and Sanghun Park2, 1

Dept. of Applied Mathematics, Sejong University, Seoul 143-747, Republic of Korea [email protected] 2 Dept. of Multimedia, Dongguk University, Seoul 100-715, Republic of Korea [email protected] Abstract. The simulation of fluid phenomena, such as smoke, water, fire, has developed rapidly in computer games, special effects, and animation. The various physics-based methods can result in high quality images. However, the simulation speed is also important issue for consideration in applications. This paper describes an efficient method for controlling smoke simulation, running entirely on the GPU. Our interest is in how to reach given user-defined target smoke states in real time. Given an initial smoke state, we propose to simulate the smoke towards a special target state. This control is made by adding special external force terms to the standard flow equation. Keywords: GPGPU, Navier-Stokes equations, interactive simulation.

1 Introduction

The modeling of natural phenomena such as smoke, fire, and liquid has received considerable attention from the computer graphics industry. This is especially true for visual smoke models, which have many applications in the creation of special effects and interactive games. It is important to produce highly realistic results as well as simulate effects in real time. This becomes a more challenging if the produced animation can be controlled by users. Recently computer graphics researchers have created simulations for controlling fluids. Treuille et al. [2] introduced a method to control fluid flows to obtain the target shapes. However, this method is too slow in shape controlled flow simulation. In fact, in real-time applications such as computer games, the simulation speed is more important than image quality. This paper presents a method of controlling interactive fluids in real time using the GPU (Graphics Processing Unit). It is based on the results of Fattal et al. [1], and our goal is to perform all the steps on the GPU. This technique can interactively create target shapes using computer-generated smoke.

2 Mathematical Background

To simulate the behavior of fluid, we must have a mathematical representation of the state of the fluid at any given time. The greatest quantity to represent 

Corresponding author.


is the velocity of the fluid. But the fluid's velocity varies in both time and space, so we represent it as a vector field. The key to fluid simulation is to take steps in time and determine the velocity field at each time step. We can achieve this by solving special equations. In physics, we assume an incompressible and homogeneous fluid for fluid simulation. This means that density is constant in both time and space. Under these assumptions, the state of the fluid over time can be described using the Navier-Stokes equations for incompressible flow: ∂u/∂t = −u · ∇u − ∇p + ν∇²u + F, ∇ · u = 0, where u(x, t) is the velocity vector of the position vector x at time t, p(x, t) is the pressure, ν is the kinematic viscosity, and F represents any external forces that act on the fluid. In our case, we may simulate a non-viscous fluid and therefore solve the Euler equation with ν = 0:

∂u/∂t = −u · ∇u − ∇p + F,   ∇ · u = 0    (1)

Let ρ = ρ(x, t) be the density scalar field at position x and time t. In order to describe the transport of the smoke along the fluid's velocity field, we solve the additional equation

∂ρ/∂t = −u · ∇ρ    (2)

Moreover, the external force term provides an important means of control over the smoke simulation. In the result of Fattal et al. [1], the special external term F(ρ, ρ∗) depends on the smoke density ρ and the target density ρ∗. This force has the same direction as the gradient vector of ρ∗. In addition, a “normalized” gradient can be used, F(ρ, ρ∗) ∝ ∇ρ∗/ρ∗. The blurring filter applied to ρ∗ must have sufficiently large support, since wherever the target density ρ∗ is constant, ∇ρ∗ = 0 and the force vanishes. In order to ensure ∇ρ̃∗ ≠ 0, the blurred version of ρ∗, denoted by ρ̃∗, can be used. The force is F(ρ, ρ∗) = ρ̃ ∇ρ̃∗/ρ̃∗ ≡ Fdf, where Fdf is named the “driving force”. In summary, we have two modified equations for the controlled fluid simulation. The first is the advection equation for density; the second is the momentum equation using the driving force:

∂ρ/∂t = −u · ∇ρ,   ∂u/∂t = −u · ∇u − ∇p + Fdf    (3)

In addition, another external force, denoted by Fui , for user interaction, can be applied to fluid by clicking and dragging with the mouse. The force is computed from the direction and length of the mouse drag.

3 Implementation Details

The goal of this research has been to improve user interaction with computer-generated smoke. We implemented GPGPU (general-purpose computation on GPUs) based techniques to simulate dynamic smoke that can be described by PDEs. Algorithm 1 shows the method used to process the user-controllable interactive target-driven simulation. Fattal et al. [1] introduced offline target-driven


Algorithm 1. User-controllable interactive target-driven simulation algorithm
1: Load a target key-frame, initialize target density field ρ∗
2: while (1) do
3:   Setup external force Fui and density field ρui        // from user interaction
4:   ρ ← ρ̃, u ← u + Fui        // apply Gaussian filter and add external force
5:   if (target-driven) then
6:     Fdf ← ρ̃ ∇ρ̃∗/ρ̃∗, u ← u + Fdf        // add driving force
7:   end if
8:   v ← ∇ × u, u ← u + vΔt        // vorticity
9:   ∂u/∂t = −u · ∇u        // advection of velocity
10:  ∇²p = ∇ · u, u ← u − ∇p        // projection
11:  ρ ← ρ + ρui        // add density
12:  ∂ρ/∂t = −u · ∇ρ        // advection of density
13:  draw(ρ)
14: end while
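As a CPU-side illustration of step 6 of Algorithm 1, the sketch below computes the driving force Fdf = ρ̃ ∇ρ̃∗/ρ̃∗ with NumPy/SciPy; the blur width sigma and the small constant eps are assumed parameters, and in the paper this computation is performed in fragment shaders rather than on the CPU.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def driving_force(rho, rho_target, sigma=4.0, eps=1e-6):
    """CPU sketch of the driving force of step 6; sigma/eps are assumptions."""
    rho_b = gaussian_filter(rho, sigma)               # blurred current density
    rho_t = gaussian_filter(rho_target, sigma)        # blurred target density
    gy, gx = np.gradient(rho_t)                       # gradient of blurred target
    return rho_b * gx / (rho_t + eps), rho_b * gy / (rho_t + eps)

rho = np.zeros((64, 64)); rho[40:50, 10:20] = 1.0          # current smoke blob
target = np.zeros((64, 64)); target[20:30, 40:50] = 1.0    # target key-frame
fx, fy = driving_force(rho, target)
print(fx.shape, float(np.abs(fx).max()))
```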

Fig. 1. User-controllable interactive simulation snapshots: (a) free, (b)-(d) target-driven, (e) free, (f)-(h) target-driven.

animation on CPUs, our algorithm is based on this technique. In free simulation mode, users can generate external force Fui and density field ρui from mouse movement. The users can change the current simulation mode to targetdriven by pressing a keyboard button, and the driving force Fdf is applied to current velocity field u. This results in driving the current density field ρ to a pre-defined target image ρ∗ . Gaussian filtering for the blurring operation can be implemented in a fragment shader, since recent shading languages support nested loops in shader programs. All of the operations in the algorithm are implemented efficiently as fragment shaders to maximize the degree of rendering speed enhancement; it is possible for users to control smoke simulation interactively.


Table 1. Simulation speeds (frames per sec) where resolution of display windows is 256 × 256 and nji is the number of Jacobian iteration. The results were evaluated on a general purpose PC, equipped with a 3.0 GHz Intel Pentium D processor, 1 GB of main memory, and a graphics card with an NVIDIA GeForce 7900 GT processor and 256 MB of memory.

Grid of simulation fields:
nji     128 × 128    256 × 256    512 × 512
 50       59.64        15.03         4.60
150       58.01        14.99         3.99
300       29.62        11.99         3.32

To verify the effectiveness of our proposed GPGPU-based system, in smoke simulation, we present timing performances for solving the Navier-Stokes equations on different sized grids. Fig. 1 shows the snapshots of the smoke simulation sequence. The resolution of each image is 256 × 256, and starting simulation mode is free. The system allows users to change the current simulation mode to either target-driven or free. In (a), users can interactively generate smoke by calculating external force Fui and density fields ρui through mouse movement in free mode. In selecting target-driven mode, current smoke starts moving to a pre-defined target key-frame (in this example, “SIGGRAPH” image was used for the target key-frame) shown from (b) to (d). When users choose free mode again, the driving force Fdf no longer affects simulation operation and users are able to apply their interaction to the current simulation field shown in (e). The status of target-driven mode re-activated without any user interaction is shown in (f) and (g). We can see that the image quality of (g) is almost identical to that of (d). When the mode is switched to target-driven after running in free mode for a few of seconds, the situation where smoke gathers around the target key-frame is shown in (h). Table 1 shows the simulation speeds on different simulation grids where 256 × 256 of fixed display window resolution is used. The system can run smoke simulation at about 12 ∼ 15 frames per second, where both the grid of simulation fields and the resolution of display windows are 256 × 256.

4 Concluding Remarks

Interactive applications, such as computer games, demand realism, but cannot afford to sacrifice speed to achieve this requirement. We developed user-controllable smoke simulation techniques based on GPGPU computations with these requirements in consideration. This permits the user to select and change the behavior of features as the simulation progresses. Users can create and modify density and flow elements of the Navier-Stokes simulation through a graphical user interface. This system also provides an interactive approach to simulate the smoke towards a special target state.


Acknowledgements This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2006-331-D00497) and Seoul R&BD Program (10672).

References 1. Fattal, R., Lischinski, D.: Target-driven smoke animation. Transactions on Graphics, Vol. 23, No. 3. ACM (2004) 441–448 2. Treuille, A., McNamara, A., Popovic, Z., Stam, J.: Keyframe control of smoke simulation. Transactions on Graphics, Vol. 22, No. 3. ACM (2003) 716–723

Variable Relaxation Solve for Nonlinear Thermal Conduction Jin Chen Princeton Plasma Physics Laboratory, Princeton, NJ, USA [email protected]

Abstract. Efficient and robust nonlinear solvers, based on Variable Relaxation, are developed to solve the nonlinear anisotropic thermal conduction arising from fusion plasma simulations. By adding first and/or second order time derivatives to the system, this type of method advances the corresponding time-dependent nonlinear system to steady state, which is the solution to be sought. In this process, only the stiffness matrix itself is involved, so that the numerical complexity and errors can be greatly reduced. In fact, this work is an extension of implementing efficient linear solvers for fusion simulation on the Cray X1E. Two schemes are derived in this work, first and second order Variable Relaxation. Four factors are observed to be critical for efficiency and for preservation of the solution's symmetric structure arising from the periodic boundary condition: mesh scales, initialization, variable time step, and nonlinear stiffness matrix computation. First, a finer mesh scale should be taken in the strong transport direction; next, the system is carefully initialized by the solution with linear conductivity; third, the time step and relaxation factor are vertex-based, varied and optimized at each time step; finally, the nonlinear stiffness matrix is updated by just scaling the corresponding linear one with the vector generated from the nonlinear thermal conductivity.

1 Introduction

In plasma physics modeling [1], the steady state of nonlinear anisotropic thermal conduction can be modeled by the following nonlinear elliptic equation

∂/∂x(κx ∂T/∂x) + ∂/∂y(κy ∂T/∂y) = s    (1)

on a 2D rectangular domain ABCD: [0, Lx] × [0, Ly] with four vertexes at A(0, 0), B(Lx, 0), C(Lx, Ly), and D(0, Ly), Lx < Ly. The coordinates are given in the Cartesian (x, y) system. The magnetic field is directed in the y direction, and accordingly we can set κx = 1 and κy as a nonlinear function of the temperature T, parallel to the magnetic field line. Therefore we can omit κx and denote κy by κ to make its meaning more clear. The periodic boundary condition is set on edges AD and

This work is supported by DOE contract DE-AC02-76CH03073.


BC, and Dirichlet boundary conditions are set on edges AB and CD. This setup allows us to separate the effects of grid misalignment from the boundary effects. The upper boundary, CD, represents the material surface where the temperature is low, and the boundary condition there is TCD = 1. At the lower boundary, AB, the inflow boundary condition is TAB(x) = 10 + 40e^(−|x−Lx/2|). Finite element discretization [2] generates the following nonlinear system (Sxx + Syy(T)) T = M s.

(2)

M is the mass matrix. Sxx and Syy(T) are the stiffness matrices contributed by the operators ∂²T/∂x² and ∂/∂y(κ ∂T/∂y), respectively. T is the temperature profile to be solved. When κ is linear, Syy(T) reduces to κ Syy. The Newton-Krylov method can be used to solve system (2), but usually it is quite expensive to update the Jacobian at each iteration. Although the Jacobian-free variation [3][4] is more efficient, information about the Jacobian is still needed to form the preconditioner, and preconditioning is expensive. In this work we present an alternative way, Variable Relaxation [5], to solve the nonlinear system (1). This is a class of iterative methods which solve the elliptic equations by adding first and/or second order time derivative terms to eq. (1) to convert it to a nonlinear parabolic or hyperbolic equation and then marching the system to steady state. In this marching process, only the nonlinear stiffness matrix Syy(T) itself is involved and needs to be updated regularly. We have been using this type of idea on the Cray X1E to design efficient linear elliptic solvers for the M3D code [6]. Although it takes longer to converge, each iteration is much cheaper than in other iterative solvers [7], so that it still wins on vector architecture machines. The nonlinear iteration can be completed in two steps: Step 1: solve eq. (1) with linear conductivity 10⁰ ≤ κ ≤ 10⁹. Step 2: solve eq. (1) with nonlinear conductivity κ = T^(5/2). The solution from “Step 1” is used as an initial guess for “Step 2”. Experiments will show that this is a very powerful strategy to accelerate convergence. We will also demonstrate how to choose the artificial time step from the CFL condition and the relaxation factor from the dispersion relation to achieve optimization. An efficient way to generate the stiffness matrix is also discussed in order to preserve the symmetric structure of the solution resulting from the periodic boundary condition.
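A toy finite-difference sketch of the two-step strategy just described is given below; it uses a simple explicit relaxation sweep in place of the paper's finite element θ-scheme, and the grid sizes, the linear conductivity value and the CFL safety factor are assumed values, so it illustrates the idea only.

```python
import numpy as np

def relax_solve(T, kappa_of_T, s, dx, dy, n_iters=20000):
    """Toy explicit relaxation sweep toward the steady state of Eq. (1);
    Dirichlet top/bottom rows, periodic in x.  Not the paper's FE solver."""
    for _ in range(n_iters):
        k = kappa_of_T(T)
        k_n = 0.5 * (k[1:-1, :] + k[2:, :])          # face-averaged conductivity
        k_s = 0.5 * (k[1:-1, :] + k[:-2, :])
        Txx = (np.roll(T, -1, axis=1)[1:-1] - 2*T[1:-1] + np.roll(T, 1, axis=1)[1:-1]) / dx**2
        Tyy = (k_n * (T[2:, :] - T[1:-1, :]) - k_s * (T[1:-1, :] - T[:-2, :])) / dy**2
        dt = 0.9 / (2.0 * (1.0/dx**2 + k[1:-1, :].max()/dy**2))   # CFL-limited step
        T[1:-1, :] += dt * (Txx + Tyy - s[1:-1, :])
    return T

# two-step strategy: linear kappa first, then kappa = T^{5/2} warm-started
ny, nx = 33, 17
dx, dy = 1.0 / (nx - 1), 2.0 / (ny - 1)
T = np.ones((ny, nx)); T[0, :] = 10.0; T[-1, :] = 1.0    # hot inflow bottom, cool top
s = np.zeros((ny, nx))
T = relax_solve(T, lambda T: 10.0 * np.ones_like(T), s, dx, dy)   # Step 1: linear kappa (assumed value)
T = relax_solve(T, lambda T: np.abs(T)**2.5, s, dx, dy)           # Step 2: nonlinear kappa = T^{5/2}
print(T[ny // 2, ::8])
```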

2 First Order Relaxation and Numerical Schemes

The so-called first-order relaxation is obtained by adding a first-order time derivative term to eq. (1):

\frac{\partial u}{\partial t} = \frac{\partial^2 T}{\partial x^2} + \frac{\partial}{\partial y}\Big(\kappa\frac{\partial T}{\partial y}\Big).    (3)


Discretizing it by finite differences in the temporal direction and as in system (2) in the spatial directions, we have

\Big(\frac{1}{\delta t} M - \theta S_{non}\Big) T^{k+1} = \Big[\frac{1}{\delta t} M + (1-\theta) S_{non}\Big] T^{k} - M s.    (4)

Here 0 ≤ θ ≤ 1. When θ = 0 the system is fully explicit; when θ = 1 it is fully implicit; when θ = 1/2 the system is stable and also has the smallest truncation error. S_{non} = S_{xx} + S_{yy}(T). δt is the artificial time step, which should be chosen small enough to keep the scheme stable and large enough to let the system approach steady state quickly. According to the CFL condition, δt is related to the mesh scales δx in the x direction and δy in the y direction by

\delta t = \frac{1}{2}\,\frac{1}{\frac{1}{\delta x^2} + \kappa\frac{1}{\delta y^2}} = \frac{\delta x\,\delta y}{4}\,\frac{2}{\frac{\delta y}{\delta x} + \kappa\frac{\delta x}{\delta y}} \equiv \frac{\delta x\,\delta y}{4}\,\overline{\delta t}.    (5)

Obviously, when κ = 1, \overline{\delta t} is symmetric in (δx, δy) and is maximized at δx = δy. More can be derived if we differentiate \overline{\delta t} with respect to δx and δy:

\frac{\partial \overline{\delta t}}{\partial \delta x} = -2\,\frac{-\frac{\delta y}{\delta x^2} + \kappa\frac{1}{\delta y}}{\big(\frac{\delta y}{\delta x} + \kappa\frac{\delta x}{\delta y}\big)^2} = 2\,\frac{\frac{\delta y}{\delta x^2} - \kappa\frac{1}{\delta y}}{\big(\frac{\delta y}{\delta x} + \kappa\frac{\delta x}{\delta y}\big)^2},

\frac{\partial \overline{\delta t}}{\partial \delta y} = -2\,\frac{\frac{1}{\delta x} - \kappa\frac{\delta x}{\delta y^2}}{\big(\frac{\delta y}{\delta x} + \kappa\frac{\delta x}{\delta y}\big)^2} = 2\,\frac{\kappa\frac{\delta x}{\delta y^2} - \frac{1}{\delta x}}{\big(\frac{\delta y}{\delta x} + \kappa\frac{\delta x}{\delta y}\big)^2}.

When κ > 1, most likely we will have \partial\overline{\delta t}/\partial\delta x < 0 and \partial\overline{\delta t}/\partial\delta y > 0. This suggests that δx should be taken as large as possible, while δy as small as possible. The convergence of scheme (4) can be analyzed in the following way. Given the transient solution of eq. (3) in the form \tilde{u} = e^{-\gamma t}\sin\frac{m\pi x}{L_x}\sin\frac{n\pi y}{L_y}, the operator \frac{\partial^2}{\partial x^2} + \frac{\partial}{\partial y}\big(\kappa\frac{\partial}{\partial y}\big) has eigenvalues \lambda_{mn} = \pi^2\big(\frac{m^2}{L_x^2} + \kappa\frac{n^2}{L_y^2}\big), where m and n are the mode numbers in the x and y directions, respectively. Then the decay rate is -\lambda_{11} and the corresponding decay time can be found by

t = \frac{1}{\lambda_{11}} = \frac{1}{\pi^2}\,\frac{1}{\frac{1}{L_x^2} + \kappa\frac{1}{L_y^2}}.

The number of iterations needed for convergence can be predicted by

N_{its} \equiv \frac{t}{\delta t} = \frac{2}{\pi^2}\,\frac{\frac{1}{\delta x^2} + \kappa\frac{1}{\delta y^2}}{\frac{1}{L_x^2} + \kappa\frac{1}{L_y^2}} = \frac{2}{\pi^2}\,\frac{\frac{N_x^2}{L_x^2} + \kappa\frac{N_y^2}{L_y^2}}{\frac{1}{L_x^2} + \kappa\frac{1}{L_y^2}}.

When κ → ∞,

N_{its} \to \frac{2}{\pi^2}\,N_y^2 = \frac{2}{\pi^2}\,\frac{N_y}{N_x}\,(N_x N_y) \approx \frac{1}{5}\,\frac{N_y}{N_x}\,(N_x N_y) \equiv c\,(N_x N_y).


(N_x N_y) is the number of unknowns. After some experiments, we found the optimized coefficient to be c = 0.64 for the problem we are studying. Also, from the following expression we find that the number of iterations increases as κ gets larger,

\frac{dN_{its}}{d\kappa} = \frac{2}{\pi^2}\,\frac{N_y^2 - N_x^2}{\big(\frac{L_y}{L_x} + \kappa\frac{L_x}{L_y}\big)^2} > 0,

as long as δy ≤ δx.
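The sketch below shows one possible implementation of scheme (4) with θ = 1/2, marching to steady state with the time step estimate (5). It uses dense linear algebra purely for brevity; the matrix names follow the text, while the interface (a callable `Syy_update` returning S_yy(T), the tolerance, and the iteration cap) is an illustrative assumption.

```python
import numpy as np

def vr4_solve(M, Sxx, Syy_update, s, T0, dx, dy, kappa_max,
              theta=0.5, tol=1e-12, max_its=100000):
    """First-order Variable Relaxation, scheme (4):
    (M/dt - theta*S) T^{k+1} = (M/dt + (1-theta)*S) T^k - M s."""
    # Time step from the CFL-type estimate (5), using the largest conductivity.
    dt = 0.5 / (1.0/dx**2 + kappa_max/dy**2)
    T = T0.copy()
    for k in range(max_its):
        S = Sxx + Syy_update(T)                       # S_non = S_xx + S_yy(T)
        A = M/dt - theta * S
        b = (M/dt + (1.0 - theta) * S) @ T - M @ s
        T_new = np.linalg.solve(A, b)
        if np.linalg.norm(T_new - T) <= tol * np.linalg.norm(T_new):
            return T_new, k                           # reached steady state
        T = T_new
    return T, max_its
```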

3 Second Order Relaxation and Numerical Schemes

Besides the addition of the first-order derivative term in eq. (3), the second-order relaxation is obtained by adding a relaxation factor τ and a second-order time derivative term to eq. (1):

\frac{\partial^2 u}{\partial t^2} + \frac{2}{\tau}\frac{\partial u}{\partial t} = \frac{\partial^2 T}{\partial x^2} + \frac{\partial}{\partial y}\Big(\kappa\frac{\partial T}{\partial y}\Big).    (6)

Again it can be discretized and rearranged as

\Big[\big(1 + \tfrac{\delta t}{\tau}\big)M - \theta\,\delta t^2 S_{non}\Big] T^{k+1} = -\big(1 - \tfrac{\delta t}{\tau}\big) M\,T^{k-1} + \big[2M + \delta t^2 (1-\theta) S_{non}\big] T^{k} - \delta t^2 M s.    (7)

The CFL condition can be expressed as \delta t^2\big(\frac{1}{\delta x^2} + \kappa\frac{1}{\delta y^2}\big) \le 1. Therefore,

\delta t \le \frac{1}{\sqrt{\frac{1}{\delta x^2} + \kappa\frac{1}{\delta y^2}}} = \frac{\sqrt{\delta x\,\delta y}}{\sqrt{\frac{\delta y}{\delta x} + \kappa\frac{\delta x}{\delta y}}} = \frac{\sqrt{\delta x\,\delta y}}{\sqrt{2}}\,\frac{\sqrt{2}}{\sqrt{\frac{\delta y}{\delta x} + \kappa\frac{\delta x}{\delta y}}}.    (8)

The relaxation factor can be found again by looking for the transient solution of eq. (6). The decay rates satisfy \gamma^2 - \frac{2}{\tau}\gamma + \lambda_{mn} = 0, or \gamma = \frac{1}{\tau} \pm \big(\frac{1}{\tau^2} - \lambda_{mn}\big)^{1/2}. For optimal damping we choose \tau^2 = \frac{1}{\lambda_{11}} = 1/\big[\pi^2\big(\frac{1}{L_x^2} + \kappa\frac{1}{L_y^2}\big)\big], i.e.,

\tau = \frac{1}{\pi}\,\frac{1}{\sqrt{\frac{1}{L_x^2} + \kappa\frac{1}{L_y^2}}} = \frac{\sqrt{L_x L_y}}{\sqrt{2}\,\pi}\,\frac{\sqrt{2}}{\sqrt{\frac{L_y}{L_x} + \kappa\frac{L_x}{L_y}}},    (9)

and the number of iterations for convergence can be predicted by

N_{its} \equiv \frac{\tau}{\delta t} = \frac{1}{\pi}\sqrt{\frac{\frac{1}{\delta x^2} + \kappa\frac{1}{\delta y^2}}{\frac{1}{L_x^2} + \kappa\frac{1}{L_y^2}}} = \frac{1}{\pi}\sqrt{\frac{\frac{N_x^2}{L_x^2} + \kappa\frac{N_y^2}{L_y^2}}{\frac{1}{L_x^2} + \kappa\frac{1}{L_y^2}}}.

When κ → ∞,

N_{its} \to \frac{1}{\pi}\sqrt{\frac{\frac{25}{9}N_x^2 + \kappa N_y^2}{\frac{25}{9} + \kappa}} = \frac{1}{\pi}\sqrt{\frac{\frac{25}{9}\frac{N_x}{N_y} + \kappa\frac{N_y}{N_x}}{\frac{25}{9} + \kappa}}\;\sqrt{N_x N_y} \approx \Big(\frac{1}{\pi}\sqrt{\frac{N_y}{N_x}}\Big)\sqrt{N_x N_y} \equiv c\,\sqrt{N_x N_y}.


Experiments show that the optimal coefficient is c = 0.6. The number of iterations increases as the conductivity κ increases, which can be understood from the following expression:

\frac{dN_{its}}{d\kappa} = \frac{1}{\pi}\,\frac{3\,(N_y^2 - N_x^2)}{2\sqrt{3N_x^2 + \kappa N_y^2}\,(3 + \kappa)^{3/2}} > 0.
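A sketch of one second-order relaxation update, following the form of scheme (7) as reconstructed above, together with the δt bound (8) and the optimal-damping choice (9), is given below. Matrix and symbol names follow the text; the interface and the dense solve are illustrative assumptions.

```python
import numpy as np

def vr7_parameters(dx, dy, Lx, Ly, kappa):
    """dt from the CFL bound (8) and tau from the optimal-damping choice (9)."""
    dt = 1.0 / np.sqrt(1.0/dx**2 + kappa/dy**2)
    tau = 1.0 / (np.pi * np.sqrt(1.0/Lx**2 + kappa/Ly**2))
    return dt, tau

def vr7_step(M, S, s, T_prev, T_curr, dt, tau, theta=0.5):
    """One update of scheme (7):
    [(1+dt/tau)M - theta*dt^2*S] T^{k+1}
        = -(1-dt/tau)M T^{k-1} + [2M + dt^2(1-theta)S] T^k - dt^2 M s."""
    A = (1.0 + dt/tau) * M - theta * dt**2 * S
    rhs = (-(1.0 - dt/tau) * (M @ T_prev)
           + (2.0 * M + dt**2 * (1.0 - theta) * S) @ T_curr
           - dt**2 * (M @ s))
    return np.linalg.solve(A, rhs)
```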

4 Variable Relaxations

When κ is a nonlinear function of T, κ changes as T^k_{ij} changes at every vertex ij and every time step k. Therefore the time step and relaxation factor change as well; this is why the name "Variable" is used. From now on, scheme (4) is called VR(4), scheme (7) is called VR(7), and κ is written as κ^k_{ij} in the nonlinear case. From the analysis given in the previous two sections we have

\delta t^k_{ij} = \frac{1}{2}\,\frac{1}{\frac{1}{\delta x^2} + \kappa^k_{ij}\frac{1}{\delta y^2}} = \frac{\delta x\,\delta y}{4}\,\frac{2}{\frac{\delta y}{\delta x} + \kappa^k_{ij}\frac{\delta x}{\delta y}}    (10)

for VR(4), and

\delta t^k_{ij} \le \frac{1}{\sqrt{\frac{1}{\delta x^2} + \kappa^k_{ij}\frac{1}{\delta y^2}}} = \frac{\sqrt{\delta x\,\delta y}}{\sqrt{2}}\,\frac{\sqrt{2}}{\sqrt{\frac{\delta y}{\delta x} + \kappa^k_{ij}\frac{\delta x}{\delta y}}},    (11)

\tau^k_{ij} = \frac{\sqrt{L_x L_y}}{\sqrt{2}\,\pi}\,\frac{\sqrt{2}}{\sqrt{\frac{L_y}{L_x} + \kappa^k_{ij}\frac{L_x}{L_y}}}    (12)

for VR(7).
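The vertex-based parameters (11)–(12), including the tuning factors introduced in the next section, can be evaluated in vectorized form as sketched below; the array-based interface is an assumption made for illustration.

```python
import numpy as np

def variable_parameters(kappa_k, dx, dy, Lx, Ly, t_scale=1.0, tau_scale=1.0):
    """Vertex-based dt^k_ij and tau^k_ij for VR(7), eqs. (11)-(12), with the
    tuning factors t_scale and tau_scale of the Numerical Issues section.
    kappa_k is the array of nonlinear conductivities kappa^k_ij at step k."""
    dt = (np.sqrt(dx * dy) / np.sqrt(2.0)
          * np.sqrt(2.0) / np.sqrt(dy/dx + kappa_k * dx/dy)) * t_scale
    tau = (np.sqrt(Lx * Ly) / (np.sqrt(2.0) * np.pi)
           * np.sqrt(2.0) / np.sqrt(Ly/Lx + kappa_k * Lx/Ly)) * tau_scale
    return dt, tau
```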

5 Numerical Issues

In practical applications, due to nonuniform meshes and the nonlinearity of the problem, δt and the damping factor τ are modified by scaling factors t_scale and τ_scale. The optimal δt and τ in both cases can be found by tuning these two parameters, as summarized in the following table:

VR(4), linear problem:     δt = (δxδy/4) · 2/(δy/δx + κ δx/δy) · t_scale
VR(7), linear problem:     δt = (√(δxδy)/√2) · √2/√(δy/δx + κ δx/δy) · t_scale
                           τ  = (√(LxLy)/(√2 π)) · √2/√(Ly/Lx + κ Lx/Ly) · τ_scale
VR(4), nonlinear problem:  δt^k_ij = (δxδy/4) · 2/(δy/δx + κ^k_ij δx/δy) · t_scale
VR(7), nonlinear problem:  δt^k_ij = (√(δxδy)/√2) · √2/√(δy/δx + κ^k_ij δx/δy) · t_scale
                           τ^k_ij  = (√(LxLy)/(√2 π)) · √2/√(Ly/Lx + κ^k_ij Lx/Ly) · τ_scale

Here δxδy/4 is the stability criterion for VR(4) when κ = 1, and 2/(δy/δx + κ δx/δy), or 2/(δy/δx + κ^k_ij δx/δy), is the extra term when κ is larger than one or nonlinear. √(δxδy)/√2 is the stability criterion for VR(7) when κ = 1, and √2/√(δy/δx + κ δx/δy), or √2/√(δy/δx + κ^k_ij δx/δy), is the extra term when κ is larger than one or nonlinear. For the relaxation factor τ, √(LxLy)/(√2 π) is the criterion for VR(7) when κ = 1, and √2/√(Ly/Lx + κ Lx/Ly), or √2/√(Ly/Lx + κ^k_ij Lx/Ly), is the extra term when κ is larger than one or nonlinear.

δx and δy are chosen based on the guidelines discussed in the previous sections; as an example we take Nx = (16 − 1) × 2 + 1 = 31, about 3 times smaller than Ny = (51 − 1) × 2 + 1 = 101, where Nx and Ny are the numbers of grid points in the x and y directions. In this case VR(4) converged in 29708 iterations at the optimal t_scale = 0.174, while VR(7) converged in 1308 iterations at the optimal t_scale = 0.41, τ_scale = 0.87. From this we can say that VR(7) is more than 20 times faster than VR(4); hence from now on we use only VR(7). Although the iteration counts seem large, each iteration is very cheap, even compared to JFNK, which requires preconditioning.

Next let us study the impact of the initialization on convergence. As mentioned before, the nonlinear process can be initialized by the solution of the linear system with constant κ. Given the linear solution for different sizes of κ, the number of iterations for the nonlinear system to reach steady state is given in the following table. We find that as long as the linear solution has κ ≥ 2, the nonlinear convergence does not differ much; the iteration only diverges when the guess has κ = 1.

κ       1        2     3     4     5     6     7, 8, 9, 10^1 ∼ 10^9
N_its   diverge  1313  1310  1309  1309  1309  1308

The marching process is further accelerated by varying δt and τ at each vertex ij and every time step k; we found the iteration does not even converge if uniform δt and τ are used.

Finally we give an efficient approach to update the nonlinear stiffness matrix S_yy(T) at each time step. The numerical integration has to be chosen carefully in order to keep the symmetric structure resulting from the periodic boundary condition. Generally

S_{yy}(T) = -\int \kappa\,\frac{\partial N_i}{\partial y}\frac{\partial N_j}{\partial y}\,d\sigma,

where N_i and N_j are the i-th and j-th basis functions of the finite element space. On each triangle, with n the index running over all collocation points, one way to formulate S_yy(T) at the k-th time step would be

S^{ij}_{yy}(T) = \sum_n w(n)\,\kappa^k(n)\,\frac{\partial N_i}{\partial y}(n)\,\frac{\partial N_j}{\partial y}(n)\,J(n),


where w(n), κ^k(n), and J(n) are the corresponding weight, conductivity, and Jacobian at the n-th point, and \frac{\partial N_i}{\partial y}(n) and \frac{\partial N_j}{\partial y}(n) are also evaluated at these points. As a function of T, κ^k(n) can be found from either

\sum_l (T^k_l)^{5/2}\,N_l(n) \quad\text{or}\quad \sum_l \big[T^k_l\,N_l(n)\big]^{5/2},

where l is the index running over the vertices of each triangle. But experiments show that the symmetric structure is destroyed by the above two formulations. We therefore worked out the following formula,

S^{ij}_{yy}(T) = \kappa^k_{ij} \sum_n w_n\,\frac{\partial N_i}{\partial y}(n)\,\frac{\partial N_j}{\partial y}(n)\,J(n),

which leads to S_{non} = S_{xx} + B^k S_{yy}, where B^k is a vector with component B_{ij} = κ^k_{ij} at the vertex indexed by ij. Therefore the nonlinear stiffness matrix S_yy can be updated by simply scaling the linear stiffness matrix S_yy with the nonlinear vector B. This approach not only reduces the computational complexity but also preserves the symmetric structure of the periodic solution. The nonlinear solution is shown in Fig. 1, again in the (x, y) coordinate system; the linear initial guess with κ = 2 × 10^4 shown in the left plot is used.
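A small sketch of this update is shown below. The text does not spell out the exact scaling convention, so the row-wise scaling (one κ^k_ij per vertex/row, applied as a diagonal matrix) is an assumption made for illustration; the sparse-matrix interface is likewise illustrative.

```python
import scipy.sparse as sp

def update_Snon(Sxx, Syy_linear, B_k):
    """Update S_non = S_xx + B^k S_yy by scaling the fixed linear stiffness
    matrix S_yy with the vector B^k of vertex conductivities kappa^k_ij,
    instead of re-assembling S_yy(T) at every step."""
    # Row-wise scaling: assumed reading of "B^k S_yy" (one kappa per vertex).
    return Sxx + sp.diags(B_k) @ Syy_linear
```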

[Fig. 1 here: two contour plots of the temperature over the (x, y) domain; the left panel is labelled κ∥ = 2 × 10^4 (linear initial guess) and the right panel κ∥ = T^{5/2} (nonlinear solution).]

Fig. 1. Nonlinear solution at Nx=31, Ny=101, tscale = 0.41, τscale = 0.87. VR(7) is stable when tscale ≤ 0.41 ; VR(4) is stable when tscale ≤ 0.174.

6 Conclusions

As an extension of our development of efficient linear elliptic solvers for fusion simulation on the Cray X1E, a nonlinear solver based on Variable Relaxation is constructed by adding first- and/or second-order time derivatives to the nonlinear elliptic equation and marching the resulting time-dependent PDEs to steady state. Instead of a Jacobian, only the stiffness matrix itself is involved and needs to be updated at each iteration. Two schemes have been given, first- and second-order Variable Relaxation, and four numerical issues have been fully discussed: the mesh scale ratio, initialization of the nonlinear process, the variable time step and relaxation factor, and efficient calculation of the nonlinear stiffness matrix. In summary, the mesh needs to be finer in the direction of strong conductivity; convergence can be sped up by using the solution of the corresponding linear system as an initial guess; the time step and relaxation factor have to be varied at each grid point and every time step; and only the nonlinear vector used to update the nonlinear stiffness matrix needs to be refreshed regularly. Therefore the only extra computation consists of renewing δt^k_ij, τ^k_ij, and B^k at each iteration, and together these measures give an efficient and robust algorithm for solving such nonlinear systems.

References

1. Park, W., et al.: Nonlinear simulation studies of tokamaks and STs. Nucl. Fusion 43 (2003) 483.
2. Chen, J., Jardin, S.C., Strauss, H.R.: Solving Anisotropic Transport Equation on Misaligned Grids. LNCS 3516, pp. 1076-1079 (2005).
3. Knoll, D.A., Keyes, D.E.: Jacobian-free Newton-Krylov methods: a survey of approaches and applications. J. Comput. Phys. 193 (2004) 357-397.
4. Ern, A., Giovangigli, V., Keyes, D.E., Smooke, M.D.: Towards polyalgorithmic linear system solvers for nonlinear elliptic problems. SIAM J. Sci. Comput. 15 (1994) 681-703.
5. Feng, Y.T.: On the discrete dynamic nature of the conjugate gradient method. J. Comput. Phys. 211 (2006) 91-98.
6. Chen, J., Breslau, J., Fu, G., Jardin, S., Park, W.: New Applications of Dynamic Relaxation in Advanced Scientific Computing. Proceedings of the ISICS'06 Conference, Dalian, China, Aug 15-18, 2006.
7. Saad, Y.: Iterative Methods for Sparse Linear Systems. PWS Publishing Company (1996).

A Moving Boundary Wave Run-Up Model Asu İnan1 and Lale Balas2 Department of Civil Engineering, Faculty of Engineering and Architecture, Gazi University, 06570 Ankara Turkey [email protected], [email protected]

Abstract. A numerical model has been developed for the simulation of long wave propagation and run-up, accounting for bottom friction. The shallow water continuity and momentum equations are solved numerically by a two-time-level finite difference method. The upwind/downwind method is applied to the nonlinear advection terms and to the continuity equation. The effects of damping and bottom friction on the computations are investigated. Since the equations lose their validity when waves break, wave breaking is checked at every step of the computations. A point can be either wet or dry at different time levels, so a moving boundary description and a staggered grid are used to overcome the difficulty of determining wet and dry points. The equations are solved by finite difference approximations of second and third order accuracy. Furthermore, space filters are applied to prevent parasitic short wave oscillations. Keywords: Finite difference, long wave, run-up, moving boundary, filters.

1 Introduction Nonbreaking long waves induced by tsunamis, tides or storms cause catastrophic damage on coasts because of high run-up levels. The numerical simulation of long wave propagation is therefore an important tool in controlling the damage caused by catastrophic long waves. Estimating the boundary between wet and dry points is a difficult problem in the simulation of wave run-up: during the simulation a computational point can be either wet or dry at different time levels, so a moving boundary description is necessary. Two-dimensional nonlinear shallow water equations including bed shear stress have been solved numerically by several researchers [1],[2]. Lynett et al. (2002) proposed a moving boundary technique to calculate wave run-up and run-down with depth-integrated equations [3]; an eddy viscosity model was inserted in the numerical model to account for breaking phenomena. Kanoglu (2004) focused on the initial value problem of the nonlinear evolution, run-up and run-down of long waves on sloping beaches for different initial wave forms [4]. Shermeneva and Shugan (2006) calculated the run-up of long gravity waves on a sloping beach using a high-order Boussinesq model [5]. Wei et al. (2006) simulated long wave run-up under nonbreaking and breaking conditions with two-dimensional well-balanced finite volume methods [6].


In this study, nonlinear long wave equations have been solved numerically to simulate wave run-up with the upwind/downwind method. The effects of wave breaking, wetting and drying of boundary points, friction, and the nonlinear terms on wave run-up have been investigated.

2 Theory The continuity equation and the equation of motion for long waves are given below, where x is the propagation direction:

\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = -g\frac{\partial \eta}{\partial x} - \frac{r\,u\,|u|}{D},    (1)

\frac{\partial \eta}{\partial t} = -\frac{\partial (Du)}{\partial x},    (2)

where u, r, D, H, and η are the horizontal velocity, bed friction factor, total depth, water depth, and water elevation above the still water level, respectively. The total depth is the sum of the water depth and the water elevation above the still water level. A two-time-level numerical scheme has been used for the solution of the system, and the upwind/downwind method has been applied to the nonlinear (advective) terms. The following algorithm is used to check the wet and dry points: the total depth has a positive value at a wet point and is zero on the boundary [7], so a grid point j is classified as wet or dry according to whether 0.5(D_{j-1} + D_j) is positive.
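The sketch below illustrates these ideas only: a wet/dry mask based on the averaged total depth and one explicit two-time-level update of eqs. (1)–(2) with a first-order upwind advection term. It is not the paper's scheme; the staggered grid, breaking check, space filters, and boundary handling are omitted, and the periodic array wrap, grid layout, and parameter names are assumptions made for illustration.

```python
import numpy as np

def step_long_wave(u, eta, h, dx, dt, r, g=9.81, eps=1e-9):
    """One explicit update of the 1D long-wave equations (1)-(2) with a
    simple upwind advection term and a wet/dry mask (periodic wrap for brevity)."""
    D = h + eta                                    # total depth
    wet = 0.5 * (np.roll(D, 1) + D) > eps          # wet/dry check at point j
    # Upwind derivative of u depending on the sign of u.
    dudx = np.where(u > 0.0, u - np.roll(u, 1), np.roll(u, -1) - u) / dx
    detadx = (np.roll(eta, -1) - np.roll(eta, 1)) / (2.0 * dx)
    u_new = u + dt * (-u * dudx - g * detadx
                      - r * u * np.abs(u) / np.maximum(D, eps))
    eta_new = eta - dt * (np.roll(D * u, -1) - np.roll(D * u, 1)) / (2.0 * dx)
    u_new[~wet] = 0.0                              # dry points carry no flow
    eta_new[~wet] = -h[~wet]                       # keep total depth zero when dry
    return u_new, eta_new
```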



n. Then n is calculated, so that r × r−1 − n × n = 1. Both r−1 and n can be calculated using the extended Euclidian algorithm. 2. Obtain Montgomery representation of a and b This step uses the r generated during the pre-computation step to transform the operands a and b into Montgomery representation. Their Montgomery representations are obtained by calculating a ¯ := a × r mod n and ¯b := b × r mod n.

GPU-Accelerated Montgomery Exponentiation

215

¯ 3. Calculate Montgomery Product x ¯=¯ a×b This step calculates the Montgomery product MonPro(¯ a, ¯b) = x ¯ := a ¯ ׯb×r−1 mod n. The Montgomery product is calculated as follows: (a) t := a ¯ × ¯b (b) m := t × n mod r (c) x := (t + m × n)/r (d) if x ≥ n then set x = x − n Montgomery Exponentiation. Because of the overhead caused by the precomputation and representation transformation steps, the Montgomery method is used for modular exponentiation rather than a single modular multiplication. A common form of the Montgomery exponentiation algorithm, which is described in [10], uses the so-called binary square-and-multiply method to calculate x = ab mod n, where a, b, n are k-bit large integers and n is odd. With |b| being the bit length of operand b, the algorithm consists of the following steps: Use n to pre-compute n and r. Calculate a ¯ := a × r mod n. Calculate x ¯ := 1 × r mod n. For i := |b| - 1 down to 0 do (a) Calculate x ¯ := MonPro(¯ x,¯ x) (b) If the i-th bit of b is set, then calculate x ¯ := MonPro(¯ a,¯ x) 5. Calculate x = MonPro(¯ x,1).

1. 2. 3. 4.

3 3.1

Proposed Algorithm Overview

This section proposes a new, GPU-accelerated exponentiation algorithm based on the Montgomery method introduced in section 2.2. This algorithm, denoted as GPU-MonExp, exploits the parallelism of GPUs by calculating multiple modular exponentiations simultaneously. The exponentiation operands have a fixed bit size, which depends on the output capabilities of the GPU hardware. Like the Montgomery exponentiation algorithm introduced in section 2.2, the GPU-MonExp algorithm depends on the Montgomery product. As a result, this section first describes a GPU-accelerated variant of the Montgomery product denoted as GPU-MonPro, which forms the basis of the GPU-MonExp algorithm. As GPUs have limited support for integer values, the proposed GPU algorithms split large integers into k 24-bit chunks, and store each chunk in a 32-bit floating point value. The first chunk contains the least significant bits and the last chunk contains the most significant bits of the large integer. Hence, the representation of a large integer x is: x = x[0], x[1], ..., x[k]

(1)

216

3.2

S. Fleissner

GPU Montgomery Product(GPU-MonPro)

The GPU-MonPro algorithm utilizes the GPU to calculate c Montgomery products in parallel: (2) x¯i := a¯i × b¯i × r−1 mod ni , 1 ≤ i ≤ c The GPU-MonPro algorithm uses large integers with a pre-defined, fixed bit size. Because of the fixed bit size, the maximum size of the ni operands is known, and by considering that r > n and r = 2z for some z, the value of the operand r can be pre-defined as well in order to simplify calculations. As graphics processing units do not provide any efficient instructions for performing bitwise operations, the GPU-MonPro algorithm uses an operand r that is a power of 256. Thus, multiplication and division operations by r can be implemented via byte shifting, which can be performed efficiently by GPUs. Since the large integers used by the algorithms in this chapter consist of k 24-bit chunks, the value of r is pre-defined as r = 2563k . As r is pre-defined, the input values for the GPU-MonPro algorithm are a¯i , b¯i , ni , ni . These input values are supplied by the GPU-MonExp algorithm described in section 3.3. The GPU-MonPro algorithm consists of two steps: Texture preparation and calculation of the Montgomery product. Step 1: Texture preparation [Main Processor]. The GPU-MonPro algorithm uses multiple two-dimensional input and output textures in RGBA color format. An RGBA texel (texture element) consists of four 32-bit floating point values and can thus be used to encode four 24-bit chunks (96 bit) of a large integer. The algorithm uses four types of input textures corresponding to the four types of operands: tex-¯ a, tex-¯b, tex-n, and tex-n . Assuming input textures with a dimension of w × h to calculate c = w × h Montgomery products, the GPU-MonPro algorithm uses the following approach to store the operands a¯i , b¯i , ni , ni in the input textures: 1. For each 0 ≤ x < w, 0 ≤ y < h do: 2. i := y × w + x a texture(s) 3. Store a ¯i in the tex-¯ (a) tex-¯ a[0] (x,y) = a ¯i [0, 1, 2, 3] 1 (b) tex-¯ a[1] (x,y) = a ¯i [4, 5, 6, 7] (c) ... (d) tex-¯ a[k/4] (x,y) = a ¯i [k − 4, k − 3, k − 2, k − 1] 4. Store ¯bi in the tex-¯b texture(s) (a) tex-¯b[0] (x,y) = ¯bi [0, 1, 2, 3] (b) ... (c) tex-¯b[k/4] (x,y) = ¯bi [k − 4, k − 3, k − 2, k − 1] 5. Store ni in the tex-n texture(s) (a) tex-n[0] (x,y) = ni [0, 1, 2, 3] (b) ... (c) tex-n[k/4] (x,y) = ni [k − 4, k − 3, k − 2, k − 1] 1

The term a ¯i [0, 1, 2, 3] is an abbreviation for the four values a ¯i [0], a ¯i [1], a ¯i [2], a ¯i [3].

GPU-Accelerated Montgomery Exponentiation

217

6. Store ni in the tex-n texture(s) (a) tex-n[0] (x,y) = ni [0, 1, 2, 3] (b) ... (c) tex-n[k/4] (x,y) = ni [k − 4, k − 3, k − 2, k − 1] Considering a large integer as k 24-bit chunks, the total number of textures required for storing the operands a¯i , b¯i , ni , ni is k4 4 = k. (There are four operands and each texel can store four 24-bit chunks). The number of required output textures is k4 . After the input and output textures are prepared, drawing commands are issued in order to invoke the fragment program instances on the GPU. Step 2: Calculation of Montgomery product [GPU]. Each instance of the fragment program on the GPU calculates one modular product. Since the GPU hardware runs multiple instances of the fragment program in parallel, several modular products are calculated at the same time. Apart from the input textures containing the a¯i , b¯i , ni , and ni values, each fragment program instance receives a (X, Y) coordinate pair that indicates which operands a ¯, ¯b, n, and  n should be retrieved from the input textures to calculate the Montgomery product. The algorithm performed by the fragment program instances, which is essentially a standard Montgomery multiplication, is as follows: 1. Use the X and Y coordinates to obtain the four operands a¯j , b¯j , nj , and nj for some specific j, with 1 ≤ j ≤ c. 2. Calculate t := a¯j × b¯j . Because the maximum bit size of the operands is pre-defined, multiplication can be implemented efficiently on GPUs using vector and matrix operations, which are capable of calculating multiple partial products in parallel. 3. Calculate m := t × nj mod r. Since r is a multiple of 256, the reduction by modulo r is achieved by byte shifting. 4. Calculate x ¯ := (t + m × nj )/r. By using the vector and matrix operations of the GPU, addition can be implemented efficiently, since partial sums can be calculated in parallel. The division by r is achieved by byte shifting. 5. Output x ¯, which is automatically diverted and stored in the output textures. 3.3

GPU Montgomery Exponentiation (GPU-MonExp)

The GPU-MonExp algorithm calculates c modular exponentiations in parallel: xi = abi mod ni , 1 ≤ i ≤ c

(3)

Each of the c exponentiations uses the same b, but different values for each ai and ni . With |b| denoting the fixed bit size of operand b, the GPU-MonExp algorithm executes the following steps:

218

S. Fleissner

1. Execute the following loop on the main processor: For i := 1 to c do (a) Use ni to pre-compute ni . (b) Calculate x¯i := 1 × r mod ni . (c) Calculate a¯i := ai × r mod ni . 2. Execute the following loop on the main processor: For l := |b| − 1 down to 0 do (a) Invoke GPU-MonPro to calculate x¯i := x¯i × x¯i × r−1 mod ni on the GPU in parallel. (1 ≤ i ≤ c) (b) If the l-th bit of b is set, then invoke GPU-MonPro to calculate x¯i := a¯i × x¯i × r−1 mod ni on the GPU in parallel. (1 ≤ i ≤ c) 3. Invoke GPU-MonPro to calculate the final results xi = x¯i × 1 × r−1 mod ni on the GPU in parallel. (1 ≤ i ≤ c) Step 1 Details. The pre-computation loop, which is executed on the main processor, calculates a corresponding ni for each ni using the extended Euclidian algorithm. If all modular products to be calculated use the same n, then only one n is computed, since n only depends on n and not on the multiplicand and multiplier. Apart from the ni values, the loop determines the initial values for all x¯i and the Montgomery representations of all ai . Step 2 Details. The main loop of the GPU-MonExp algorithm is run on the main processor and uses the square-and-multiply approach to calculate the modular exponentiations. The main loop first invokes GPU-MonPro with the x¯i values as texture parameters in order to calculate the squares x¯i × x¯i × r mod ni on the GPU in parallel. Depending on whether the current bit of operand b is set, the algorithm invokes GPU-MonPro again with x¯i and a¯i as texture parameters to calculate the Montgomery products x¯i × a¯i × r mod ni on the GPU in parallel. After the main loop completes, the final results xi = abi mod ni are transferred back to the main processor.

4 4.1

Algorithm Evaluation Performance Test Overview

This section introduces and analyzes the results of performance tests, which were conducted in order to evaluate the potential of the proposed GPU-MonExp algorithm. In order to obtain representative test data, three different hardware configurations were used to run the performance tests. The details of these three test systems, which are denoted as system A, B, and C, are shown in table 1. The GPU-MonExp performance test works as follows. As a first step, the implementation of the GPU-MonExp algorithm is run with random 192-bit operands for 1 to 100000 exponentiations. The execution time TGP U is measured and recorded. Then an implementation of the square-and-multiply Montgomery exponentiation algorithm described in section 2.2 is run with the same input

GPU-Accelerated Montgomery Exponentiation

219

Table 1. Test Systems System Processor/Memory/Graphics Bus GPU A Intel Pentium 4, 2.66 GHz NVIDIA GeForce 6500 1 GB RAM, PCI-Express 256 MB RAM B Intel Celeron, 2.40 GHz NVIDIA GeForce FX 5900 Ultra 256 MB RAM, AGP 256 MB RAM C Intel Pentium 4, 3.20GHz NVIDIA GeForce 7800 GTX 1 GB RAM, PCI-Express 256 MB RAM

on the computer’s main processor, and its execution time, denoted as TSQM , is recorded as well. Using the execution times of both implementations, the following speedup factor of the GPU-MonExp algorithm is determined: s=

TSQM TGP U

(4)

If the implementation of the GPU-MonExp algorithm runs faster than the square-and-multiply Montgomery exponentiation, then its execution time is shorter and s > 1. The implementations of the GPU-MonExp and the underlying GPU-MonPro algorithm are based on OpenGL, C++, and GLSL (OpenGL Shading Language). The second step of GPU-MonPro described in section 3.2 is implemented as a GLSL fragment program. The parts of the GPU-MonExp algorithm running on the main processor are implemented in C++. 4.2

Test Results

Overall, the test results indicate that the GPU-MonExp algorithm is significantly faster than the square-and-multiply Montgomery exponentiation, if multiple results are calculated simultaneously. When a large amount of modular exponentiations is calculated simultaneously, the GPU-MonExp implementation is 136 - 168 times faster. Table 2. GPU-MonExp Speedup Factors Exponentiations System A 1 0.2264 3 1.2462 10 4.0340 50 19.5056 100 33.2646 10000 138.0390 100000 168.9705

System B 0.0783 0.4843 1.2878 6.4544 13.0150 110.9095 136.4484

System C 0.1060 0.8271 2.9767 13.2813 25.6985 138.8840 167.1229

As shown in table 2, GPU-MonExp already achieves a performance gain when 3 to 10 exponentiations are calculated simultaneously. When calculating 100000

220

S. Fleissner

exponentiations simultaneously, the speedup factors are 168.9705 for system A, 136.4484 for system B, and 167.1229 for system C.

5

Conclusions

This paper introduces the concept of using graphics processing units for accelerating Montgomery exponentiation. In particular, a new GPU-accelerated Montgomery exponentiation algorithm, denoted as GPU-MonExp, is proposed, and performance tests show that its implementation runs 136 - 168 times faster than the standard Montgomery exponentiation algorithm. Public key encryption algorithms that are based on elliptic curves defined over prime fields (prime curves) are likely to benefit from the proposed GPU-MonExp algorithm, which can serve as the basis for GPU-accelerated versions of the point doubling, point addition, and double-and-add algorithms. Signcryption schemes based on elliptic curves, such as [11,12], are a possible concrete application for the GPU-MonExp algorithm.

References 1. Manocha, D.: General-purpose computations using graphics processors. Computer 38(8) (August 2005) 85–88 2. Pharr, M., Fernando, R.: GPU Gems 2 : Programming Techniques for HighPerformance Graphics and General-Purpose Computation. Addison-Wesley (2005) 3. Govindaraju, N.K., Raghuvanshi, N., Manocha, D.: Fast and approximate stream mining of quantiles and frequencies using graphics processors. In: SIGMOD ’05, New York, NY, USA, ACM Press (2005) 611–622 4. M. L. Wong, T.T.W., Foka, K.L.: Parallel evolutionary algorithms on graphics processing unit. In: IEEE Congress on Evolutionary Computation 2005. (2005) 2286–2293 5. Cook, D., Ioannidis, J., Keromytis, A., Luck, J.: Cryptographics: Secret key cryptography using graphics cards (2005) 6. Montgomery, P.L.: Modular multiplication without trial division. Mathematics of Computation 44(170) (April 1985) 519–521 7. Gueron, S.: Enhanced montgomery multiplication. In: CHES ’02, London, UK, Springer-Verlag (2003) 46–56 8. Walter, C.D.: Montgomery’s multiplication technique: How to make it smaller and faster. Lecture Notes in Computer Science 1717 (1999) 80–93 9. WU, C.L., LOU, D.C., CHANG, T.J.: An efficient montgomery exponentiation algorithm for cryptographic applications. INFORMATICA 16(3) (2005) 449–468 10. Koc, C.K.: High-speed RSA implementation. Technical report, RSA Laboratories (1994) 11. Zheng, Y., Imai, H.: Efficient signcryption schemes on elliptic curves. In: Proc. of IFIP SEC’98. (1998) 12. Han, Y., Yang, X.: Ecgsc: Elliptic curve based generalized signcryption scheme. Cryptology ePrint Archive, Report 2006/126 (2006)

Hierarchical-Matrix Preconditioners for Parabolic Optimal Control Problems Suely Oliveira and Fang Yang Department of Computer Science, The University of Iowa, Iowa City IA 52242, USA

Abstract. Hierarchical (H)-matrices approximate full or sparse matrices using a hierarchical data sparse format. The corresponding H-matrix arithmetic reduces the time complexity of the approximate H-matrix operators to almost optimal while maintains certain accuracy. In this paper, we represent a scheme to solve the saddle point system arising from the control of parabolic partial differential equations by using H-matrix LUfactors as preconditioners in iterative methods. The experiment shows that the H-matrix preconditioners are effective and speed up the convergence of iterative methods. Keywords: hierarchical matrices, multilevel methods, parabolic optimal control problems.

1

Introduction

Hierarchical-matrices (H-matrices) [6], since their introduction [1,2,3,6], have been applied to various problems, such as integral equations and partial differential equations. The idea of H-matrices is to partition a matrix into a hierarchy of rectangular subblocks and approximate the subblocks by low rank matrices (Rkmatrices). The H-matrix arithmetic [1,3,4] defines operators over the H-matrix format. The fixed-rank H-matrix arithmetic keeps the rank of a Rk-matrix block below a fixed value, whereas the adaptive-rank H-matrix arithmetic adjusts the rank of a Rk-matrix block to maintain certain accuracy in approximation. The operators defined in the H-matrix arithmetic include H-matrix addition, Hmatrix multiplication, H-matrix inversion, H-matrix LU factorization, etc. The computation complexity of these operators are almost optimal O(n loga n). The H-matrix construction for matrices from discretization of partial differential equations depends on the geometric information underlying the problem [3]. The admission conditions, used to determine whether a subblock is approximated by a Rk-matrix, are typically based on Euclidean distances between the supports of the basis functions. For sparse matrices the algebraic approaches [4,11] can be used, which use matrix graphs to convert a sparse matrix to an H-matrix by representing the off-diagonal zero blocks as Rk-matrices of rank 0. Since the H-matrix arithmetic provides cheap operators, it can be used with H-matrix construction approaches to construct preconditioners for iterative methods, such as Generalized Minimal Residual Method (GMRES), to solve systems of linear equations arising from finite element or meshfree discretizations of partial differential equations [4,8,9,10,11]. Y. Shi et al. (Eds.): ICCS 2007, Part I, LNCS 4487, pp. 221–228, 2007. c Springer-Verlag Berlin Heidelberg 2007 

222

S. Oliveira and F. Yang

In this paper we consider the finite time linear-quadratic optimal control problems governed by parabolic partial differential equations. To solve these problems, in [12] the parabolic partial differential equations are discretized by finite element methods in space and by θ-scheme in time; the cost function J to be minimized is discretized using midpoint rule for the state variable and using piecewise constant for the control variable in time; Lagrange multipliers are used to enforce the constraints, which result a system of saddle point type; then iterative methods with block preconditioners are used to solve the system. We adapt the process in [12] and use H-matrix preconditioners in iterative methods to solve the system. First we apply algebraic H-matrix construction approaches to represent the system in the H-matrix format; then H-LU factorization in the H-matrix arithmetic is adapted to the block structure of the saddle point system to compute the approximate H-LU factors; at last, these factors are used as preconditioner in iterative methods to compute the approximate solutions. The numerical results show that the H-matrix preconditioned approach is competitive and effective to solve the above optimal control problem. This paper is organized as follows. In Sect. 2 we introduce the optimal control model problem and the discretization process; Section 3 is an introduction to H-matrices; in Sect. 4, we review the algebraic approaches to H-matrix construction; in Sect. 5 we present the scheme to build the H-matrix preconditioners; finally in Sect. 6 we present the numerical results.

2

The Optimal Control Problem

The model problem [12] is to minimize the following quadratic cost function: q r J(z(u), u) := z(v) − z∗ 2L2 (t0 ,tf ;L2 (Ω)) + v2L2 (t0 ,tf ;Ω) 2 2 s + z(v)(tf , x) − z∗ (tf , x)2L2 (Ω) 2 under the constraint of the state equation: ⎧ ⎨ ∂t z + Az = Bv, t ∈ (t0 , tf ) z(t, ∂Ω) = 0 , ⎩ z(t0 , Ω) = 0

(1)

(2)

where the state variable z ∈ Y = H01 (Ω) and the control variable v ∈ U = L2 (t0 , tf ; Ω). B is an operator in L(L2 (t0 , tf ; Ω), L2 (t0 , tf ; Y  )) and A is an uniformly elliptic linear operator from L2 (t0 , tf ; Y ) to L2 (t0 , tf ; Y  ). The state variable z is dependent on v. z∗ is a given target function. 2.1

Discretization in Space

The system is first discretized in space by fixing t. Considering the discrete subspace Yh ∈ Y and Uh ∈ U , then the discretized weak form of (2) is given as: (z˙h (t), ηh ) + (Azh (t), ηh ) = (Buh (t), ηh ),

∀ηh ∈ Yh and t ∈ (t0 , tf ) .

(3)

Hierarchical-Matrix Preconditioners for Parabolic Optimal Control Problems

223

Let{φ1 , φ2 , .., φn } be a basis of Yh and {ψ1 , ψ2 , .., ψm } be a basis of Uh , where m ≤ n. Apply the finite element methods to (3), we obtain the following system of ordinary differential equations: M y˙ + Ay = Bu,

t ∈ (t0 , tf ) .

(4)

Here Ai,j = (Aφj , φi ) is a stiffness matrix, Mi,j = (φj , φi ) and Ri,j = (ψj , ψi ) solution is zh (t, x) = are mass matrices, and Bi,j = (Bψi , φj ). The semi-discrete   i yi (t)φi (x) with control function uh (t, x) = i ui (t)ψi (x). We can apply the analogous spatial discretization to the cost function (1), and obtain:  t0 e(t)T Q(t)e(t) + u(t)T R(t)u(t) dt + e(tf )T C(t)e(tf ), J(y, u) = (5) tf

where e(·) = y(·) − y∗ (·) is the difference between the state variable and the given target function. 2.2

Discretization in Time

After spatial discretization, the original optimal problem is transferred into minimizing (5) under the constraint of n ordinary differential equations (4). θ-scheme is used to discretize the above problem. First the time scale is subdivided into l intervals of length τ = (tf − t0 )/l. Let F0 = M + τ (1 − θ)A and F1 = M − τ θA. The discretization of equation (4) is given by: Ey + N u = f , (6) where

⎤ ⎡ −F1 ⎥ ⎢ .. .. E=⎣ ⎦, . . F0 −F1

⎡ ⎢ N =τ⎣



B ..

⎥ ⎦,

. B

⎤ y(t1 ) ⎥ ⎢ y ≈ ⎣ ... ⎦ , ⎡

etc .

(7)

y(tn )

Then discretize the cost function (5) by using piecewise linear functions to approximate the state variable and piecewise constant functions to approximate the control variable and obtain the following discrete form of (5): J(y, u) = uT Gu + eT Ke,



(8)

where e = y − y∗ and the target trajectory z∗ (t, x) ≈ z∗,h (t, x) = i (y∗ )i (t)φi (x). A Lagrange multiplier vector p is introduced to enforce the constraint of (6), and we have the Lagrangian 1 T (u Gu + eT Ke) + pT (Ey + N u − f ) . (9) 2 To find y, u and p where ∇L(y, u, p) = 0 in (9), we need to solve the following system: ⎤ ⎤⎡ ⎤ ⎡ ⎡ y M y∗ K 0 ET ⎣ 0 G N T ⎦ ⎣u⎦ = ⎣ 0 ⎦ . (10) f E N 0 p L(y, u, p) =

224

3

S. Oliveira and F. Yang

Hierarchical-Matrices

The concept and properties of H-matrices are induced by the index cluster tree TI and the block cluster tree TI×I [6] . In the rest of this paper, #A denotes the number of elements of set A and S(i) denotes the children of node i. 3.1

Index Cluster Tree and Block Cluster Tree

An index cluster tree TI defines a hierarchical partition tree over an index set I = (0, . . . , n − 1). Note that (0, 1, 2) = (0, 2, 1).TI has the following properties: its root is I; any node i ∈ TI either is a leaf or has children S(i); the parent node i = j∈S(i) j and its children are pairwise disjoint. A block cluster tree TI×I is a hierarchical partition tree over the product index set I × I. Given tree TI and an admissibility condition (see below), TI×I can be constructed as follows: its root is I × I; if s × t in TI×I satisfies the admissibility condition, it is an Rk-matrix leaf; else if #s < Ns or #t < Ns , it is a full-matrix leaf; otherwise it is partitioned into subblocks on the next level and its children (subblocks) are defined as S(s × t) = { i × j | i, j ∈ TI and i ∈ S(s), j ∈ S(t) }. A constant Ns ∈ [10, 100] is used to control the size of the smallest blocks. An admissibility condition is used to determine whether a block to be approximated by an Rk-matrix. An example of an admissibility condition is: s × t is admissible if & only if : min(diam(s), diam(t)) ≤ μ dist(s, t),

(11)

where diam(s) denotes the Euclidean diameter of the support set s, and dist(s, t) denotes the Euclidean distance of the support set s and t. The papers [1,2,3] give further details on adapting the admissibility condition to the underlying problem or the cluster tree. Now we can define an H-matrix H induced by TI×I as follows: H shares the same tree structure with TI×I ; the data are stored in the leaves; for each leaf s × t ∈ TI×I , its corresponding block Hs×t is a Rk-matrix, or a full matrix if #s < Ns or #t < Ns . Fig. 1 shows an example of TI , TI×I and the corresponding H-matrix. 0 1 2 3

{0 1 2 3}

{0 1} {2 3}

{0 1 2 3}X{0 1 2 3} {0 1}X{0 1} {0 1}X{2 3}{2 3}X{0 1} {2 3}X{2 3} RK

{0} {1} {2} {3} (a)

RK

{0}X{0} {0}X{1} {1}X{0} {1}X{1} (b)

0 1 2 3

(c)

Fig. 1. (a) is TI , (b) is TI×I and (c) is the corresponding H-matrix. The dark blocks in (c) are Rk-matrix blocks and the white blocks are full matrix blocks.

An m × n matrix M is called an Rk-matrix if rank(M ) ≤ k and it is represented in the form of a matrix product M = AB T , where M is m × n, A is

Hierarchical-Matrix Preconditioners for Parabolic Optimal Control Problems

225

m × k and B is n × k. If M is not of rank k, then a rank k approximation can be computed in O(k 2 (n + m) + k 3 ) time by using a truncated Singular Value Decomposition (SVD) [1,3]. 3.2

H-Matrix Arithmetic

The following is a short summary of the H-matrix arithmetic. A detailed introduction can be found in [1,2]. H-matrix operations perform recursively; therefore it is important to define the corresponding operations on the leaf subblocks, which are either full or Rkmatrices. These operations are approximate as certain operations do not create Rk-matrices (such as adding two Rk-matrices). In such case, a truncation is performed using an SVD to compute a suitable Rk-matrix. For example, the sum of two rank k matrices can be computed by means of a 2k × 2k SVD. The computational complexity of the H-matrix arithmetic strongly depends on the structure of TI×I . Under fairly general assumptions on the block cluster tree TI×I the complexity of H-matrix operators is O(n logα n) [3,6].

4

Algebraic Approaches for Hierarchical-Matrix Construction

Algebraic H-matrix construction approaches can be applied to sparse matrices. These approaches take advantage that most entries of a sparse matrix are zeros. They build H-matrix cluster tree by partitioning a matrix graph either bottomup or top-down. The multilevel clustering based algorithm [11] constructs the cluster tree “bottom-up”, i.e., starts with the leaves and successively clusters them together until the root is reached. Domain decomposition in [4] and bisection are “top-down” algebraic approaches, which start with the root and successively subdivides a cluster into subsets. 4.1

Algebraic Approaches to Construct an Index Cluster Tree

In [11] we propose an H-matrix construction approach based on multilevel clustering methods. To build clusters over the nodes in Gi = (V (Gi ), E(Gi )), an algorithm based on Heavy Edge Matching (HEM) [7] is used. After building the clusters, a coarse graph Gi+1 is constructed: such that for each clus(i) ter Ck ⊂ V (Gi ) there is a node k ∈ V (Gi+1 ); the edge weight wkt of edge ekt ∈ E(Gi+1 ) is the sum of the weights of all the edges, which connect the nodes (i) (i) in cluster Ck to the nodes in cluster Ct in graph Gi . Recursively applying the above coarsening process gives a sequence of coarse graphs G1 , G2 , . . . , Gh . The index cluster tree TI is constructed by making k ∈ V (Gi ) the parent of every (i) s ∈ Ck . The root of TI is the set V (Gh ), which is the parent to all nodes in Gh . In [4], domain decomposition based clustering is used to build a cluster tree TI . Starting from I, a cluster is divided into three sons, i.e. S(c) = {c1 , c2 , c3 } and c = c1 ∪ c2 ∪ c3 , so that the domain-clusters c1 and c2 are disconnected

226

S. Oliveira and F. Yang

and the interface-cluster c3 is connected to both c1 and c2 . Then the domainclusters are successively divided into three subsets, and the interface-clusters are successively divided into two interface-clusters until the size of a cluster is small enough. To build a cluster tree TI based on bisection is straight forward. Starting from a root set I and a set is successive partitioned into two subsets with equal size. This construction approach is suitable for the sparse matrices where the none zero entries are around the diagonal blocks. 4.2

Block Cluster Tree Construction for Algebraic Approaches

The admissibility condition used to build TI×I for the algebraic approaches is defined as follows: a block s × t ∈ TI×I is admissible if and only if s and t are not joined by an edge in the matrix graph; an admissible block corresponds to a zero block and is represented as a Rk-matrix of rank zero; an inadmissible block is partitioned further or represented by a full matrix.

5

Hierarchical-Matrix Preconditioners

The construction of H-matrix preconditioners for a system of saddle point type is based on the block LU factorization. To obtain a relative cheap yet good approximate LU factors, we replace the ordinary matrix operators by the corresponding H-matrix operators[4,8,11]. First the matrix in (10) is converted to an H-matrix. Since the nonzero entries of each subblock are centered around the diagonal blocks, we apply the bisection approach to the submatrix K, G, E and N respectively. Then we obtain the following H-matrix, which is on the left side of the equation (H indicates a block in the H-matrix format): ⎡ ⎤ ⎡ ⎤⎡ ⎤ T KH 0 EH L1H 0 U 1H 0 M 1TH 0 T ⎦ ⎣ 0 GH NH = ⎣ 0 L2H 0 ⎦ ⎣ 0 U 3H M 2TH ⎦ . (12) EH NH 0 0 0 U 3H M 1H M 2H L3H The block cluster tree TI×I of L1H , L2H , M 1H , and M 2H is same as the block cluster tree structure of KH , GH , EH , and NH respectively. The block cluster tree structure of L3H is based on the block tree structure of EH : the block tree of L3H is symmetric; the tree structure of the lower-triangular of L3H is same as the tree structure of the lower-triangular of EH ; the tree structure of the upper-triangular of L3H is the transpose of the tree structure of the lower triangular. L1H and L2H are obtained by apply H-Cholesky factorization to KH and GH : KH = L1H ∗H U 1H and GH = L2H ∗H U 2H . Then use the Hmatrix upper triangular solve, we can get M 1H by solving M 1H U 1H = EH . M 1H have the same block tree as EH . In the same way we can compute M 2H , which has the same block cluster tree structure as NH . At last we construct the block cluster tree for L3H and then apply H-LU factorization to get L3H : L3H U 3H = M 1H ∗H M 1TH +H M 2H ∗H M 2TH .

Hierarchical-Matrix Preconditioners for Parabolic Optimal Control Problems

6

227

Experimental Results

In this section, we present the numerical results of solving the optimal control problem (1) constrained by the following equation: ⎧ ⎨ ∂t z − ∂xx z = v, t ∈ (0, 1), x ∈ (0, 1) z(t, 0) = 0, z(t, 1) = 0 (13) ⎩ z(0, x) = 0, x ∈ [0, 1] with the target function z∗ (t, x) = x(1 − x)e−x . The parameters in the control function J are q = 1, r = 0.0001 and s = 0. Table 1. Time for computing H-LU factors and GMRES iterations n(n1/n2)

L1H L2H M 1H M 2H L3H GMRES number iteration time of GMRES iterations 592(240/112) 0 0 0.01 0 0.01 0 1 2464(992/480) 0.01 0 0.01 0 0.04 0 1 10048(4032/1984) 0.03 0.01 0.13 0.02 0.39 0.04 1 40576(16256/8064) 0.21 0.06 0.84 0.26 4.13 0.32 3 163072(65280/32512) 1.09 0.42 4.23 1.74 25.12 2.66 6

GMRES iteration stops where the original residuals are reduced by the factor of 10−12 . The convergence rate a ia defined as the average decreasing speed of residuals in each iteration. The fixed-rank H-matrix arithmetic is used and we set the rank of each Rk-matrix block to be ≤ 2. The tests are performed on a Dell workstation with AMD64 X2 Dual Core Processors (2GHz) and 3GB memory. Table 1 shows the time to compute the different parts of the H-LU factors and the time of GMRES iterations (in seconds). n is the size of the problem, n1 and n2 is the number of rows of K and G respectively. Based on Table 1, the time to compute L3H contributes the biggest part of the total time to set up the preconditioner. The convergence rates

Times for solving the system

102

10−2

total time setup time

10−4 Convergence rate

101

time (sec.)

gmres iteration time

10−6

100

10−8

10−1 10−10

10−12 3 10

10 4

Problem size

10 5

10 6

10−2 2 10

10 3

10 4 Problem size

10 5

10 6

Fig. 2. (a) The convergence rates of GMRES (b) Total times for solving the system

228

S. Oliveira and F. Yang

Fig. 2-(a) shows the convergence rate of the H-LU preconditioned GMRES and Fig. 2-(b), plotted on a log-log scale, shows the time for building the preconditioners and the time for the GMRES iterations. Based the results, we can see that H-LU speeds up the convergence of GMRES iteration significantly. The problem in our implementation is that the time to compute L3 still consists a significant part of the LU-factorization time. In the future, more work needs to be done to reduce the complexity of computing L3 further. More discussion about H-matrix preconditioners and applications will come in [13].

References 1. B¨ orm, S., Grasedyck, L., Hackbusch, W.: Introduction to hierarchical matrices with applications. Engineering Analysis with Boundary Elements. 27 (2003) 405–422 2. B¨ orm, S., Grasedyck, L., Hackbush, W.: Hierarchical matrices. Lecture Notes No. 21. Max-Planck-Institute for Mathematics in the Sciences, Leipzig (2003) 3. Grasedyck, L., Hackbusch, W.: Construction and Arithmetics of H-matrices. Computing. 70 (2003) 295–334 4. Grasedyck, L. , Kriemann, R. , LeBorne, S.: Parallel Black Box Domain Decomposition Based H-LU Preconditioning. Mathematics of Computation, submitted. (2005) 5. Gravvanis, G.: Explicit approximate inverse preconditioning techniques Archives of Computational Methods in Engeneering. 9 (2002) 6. Hackbusch, W.: A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices. Computing. 62 (1999) 89–108 7. Karypis, G., Kumar, V.: A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput. 20 (1999) 359–392 8. LeBorne, S.: Hierarchical matrix preconditioners for the Oseen equations. Comput. Vis. Sci. (2007) 9. LeBorne, S., Grasedyck, L.: H-matrix preconditioners in convection-dominated problems. SIAM J. Matrix Anal. Appl. 27 (2006) 1172–1183 10. LeBorne, S., Grasedyck, L., Kriemann, R.: Domain-decomposition based H-LU preconditioners. LNCSE. 55 (2006) 661–668 11. Oliveira, S., Yang, F.: An Algebraic Approach for H-matrix Preconditioners. Computing, submmitted. (2006) 12. Schaerer, C. and Mathew, T. and Sarkis, M.: Block Iterative Algorithms for the Solution of Parabolic Optimal Control Problems. VECPAR. (2006) 13. Yang, F.: H-matrix preconditioners and applications. PhD thesis. the University of Iowa

Searching and Updating Metric Space Databases Using the Parallel EGNAT Mauricio Marin1,2,3 , Roberto Uribe2 , and Ricardo Barrientos2 Yahoo! Research, Santiago, Chile DCC, University of Magallanes, Chile 3 [email protected] 1

2

Abstract. The Evolutionary Geometric Near-neighbor Access Tree (EGNAT) is a recently proposed data structure that is suitable for indexing large collections of complex objects. It allows searching for similar objects represented in metric spaces. The sequential EGNAT has been shown to achieve good performance in high-dimensional metric spaces with properties (not found in others of the same kind) of allowing update operations and efficient use of secondary memory. Thus, for example, it is suitable for indexing large multimedia databases. However, comparing two objects during a search can be a very expensive operation in terms of running time. This paper shows that parallel computing upon clusters of PCs can be a practical solution for reducing running time costs. We describe alternative distributions for the EGNAT index and their respective parallel search/update algorithms and concurrency control mechanism 1 .

1

Introduction

Searching for all objects which are similar to a given query object is a problem that has been widely studied in recent years. For example, a typical query for these applications is the range query which consists on retrieving all objects within a certain distance from a given query object. That is, finding all similar objects to a given object. The solutions are based on the use of a data structure that acts as an index to speed up queries. Applications can be found in voice and image recognition, and data mining problems. Similarity can be modeled as a metric space as stated by the following definitions. Metric space. A metric space is a set X in which a distance function is defined d : X 2 → R, such that ∀ x, y, z ∈ X, 1. d(x, y) ≥ and d(x, y) = 0 iff x = y. 2. d(x, y) = d(y, x). 3. d(x, y) + d(y, z) ≥ (d(x, z) (triangular inequality). 1

This work has been partially funded by FONDECYT project 1060776.

Y. Shi et al. (Eds.): ICCS 2007, Part I, LNCS 4487, pp. 229–236, 2007. c Springer-Verlag Berlin Heidelberg 2007 

230

M. Marin, R. Uribe, and R. Barrientos

Range query. Given a metric space (X,d), a finite set Y ⊆ X, a query x ∈ X, and a range r ∈ R. The results for query x with range r is the set y ∈ Y , such that d(x, y) ≤ r. The k nearest neighbors: Given a metric space (X,d), a finite set Y ⊆ X, a query x ∈ X and k > 0. The k nearest neighbors of x is the set A in Y where |A| = k and there is no object y ∈ A such as d(y,x). The distance between two database objects in a high-dimensional space can be very expensive to compute and in many cases it is certainly the relevant performance metric to optimize; even over the cost secondary memory operations. For large and complex databases it then becomes crucial to reduce the number of distance calculations in order to achieve reasonable running times. This makes a case for the use of parallelism. Search engines intended to be able to cope with the arrival of multiple query objects per unit time are compelled to using parallel computing techniques in order to reduce query processing running times. In addition, systems containing complex database objects may usually demand the use of metric spaces with high dimension and very large collections of objects may certainly require careful use of secondary memory. The distance function encapsulates the particular features of the application objects which makes the different data structures for searching general purpose strategies. Well-known data structures for metric spaces are BKTree [3], MetricTree [8], GNAT [2], VpTree [10], FQTree [1], MTree [4], SAT [5], Slim-Tree [6]. Some of them are based on clustering and others on pivots. The EGNAT is based on clustering [7]. Most data structures and algorithms for searching in metric-space databases were not devised to be dynamic ones. However, some of them allow insertion operations in an efficient manner once the whole tree has been constructed from an initial set of objects. Deletion operations, however, are particularly complicated because in this strategies the invariant that supports the data structure can be easily broken with a sufficient number of deletions, which makes it necessary to re-construct from scratch the whole tree from the remaining objects. When we consider the use of secondary memory we find in the literature just a few strategies which are able to cope efficiently with this requirement. A wellknow strategy is the M-Tree [4] which has similar performance to the GNAT in terms of number of accesses to disk and overall size of the data structure. In [7] we show that the EGANT has better performance than the M-Tree and GNAT. The EGNAT is able to deliver efficient performance under high dimensional metric spaces and the use of secondary memory with a crucial advantage, namely it is able to handle update operations dynamically. In this paper we propose the parallelization of the EGANT in the context of search engines for multimedia databases in which streams of read-only queries are constantly arriving from users together with update operations for objects in the database. We evaluate alternatives for distributing the EGANT data structure on a set of processors with local memory and propose algorithms for performing searches and updates with proper control of read-write conflicts.

Searching and Updating Metric Space Databases Using the Parallel EGNAT

2

231

The EGNAT Data Structure and Algorithms

The EGNAT is based on the concepts of Voronoi Diagrams and is an extension of the GNAT proposed in [2], which in turn is a generalization of the Generalized Hyperplane Tree (GHT) [8]. Basically the tree is constructed by taking k points selected randomly to divide the space {p1 , p2 , . . . , pk }, where every remaining point is assigned to the closet one among the k points. This is repeated recursively in each sub-tree Dpi . The EGNAT is a tree that contains two types of nodes, namely a node bucket and another gnat. All nodes are initially created as buckets maintaining only the distance to their fathers. This allows a significant reduction in space used in disk and allows good performance in terms a significant reduction of the number of distance evaluations. When a bucket becomes full it evolves from a bucket node to a gnat one by re-inserting all its objects into the newly created gnat node. In the search algorithm described in the following we assume that one is interested in finding all objects at a distance d ≤ r to the query object q. During search it is necessary to determine whether it is a bucket node or a gnat node. If it is a bucket node, we can use the triangular inequality over the center associated with the bucket to avoid direct (and expensive) computation of the distances among the query object and the objects stored in the bucket. This is effected as follows, – Let q be the query object, let p be the center associated with the bucket (i.e., p is a center that has a child that is a bucket node), let si be every object stored in the bucket, and let r be the range value for the search, then if holds Dist(si , p) > Dist(q, p) + r or Dist(si , p) < Dist(q, p) − r , it is certain that the object si is not located within the range of the search. In other case it is necessary to compute the distance between q and si . We have observed on different types of databases that this significantly reduces the total amount of distance calculations performed during searches. For the case in which the node is of type gnat, the search is performed recursively with the standard GNAT method as follows, 1. Assume that we are interested in retrieving all objects with distance d ≤ r to the query object q (range query). Let P be the set of centers of the current node in the search tree. 2. Choose randomly a point p in P , calculate the distance d(q, p). If d(q, p) ≤ r, add p to the output set result. 3. ∀ x ∈ P , if [d(q, p) − r, d(q, p) + r] ∩ range(p, Dx ) is empty, the remove x from P . 4. Repeat steps 2 and 3 until processing all points (objects) in P . 5. For all points pi ∈ P , repeat recursively the search in Dpi .

3 Efficient Parallelization of the EGNAT

We basically propose two things in this section. Firstly, we look for a proper distribution of the tree nodes and, based on that, we describe how to perform searches in a situation in which many users submit queries to a parallel server by putting queries into a receiving broker machine. This broker routes the queries to the parallel server and receives from it the answers, in order to pass the results back to the users. This is the typical scheme for search engines. In addition, due to the EGNAT structure we employ to build a dynamic index in each processor, the parallel server is able to cope with update operations taking place concurrently with the search operations submitted by the users.

Secondly, we propose a very fast concurrency control algorithm which allows search and update operations to take place without producing the potential read/write conflicts arising in high-traffic workloads for the server. We claim very fast based on the fact that the proposed concurrency control mechanism does not incur the overheads introduced by the classical locks or rollback strategies employed by the typical asynchronous message passing model of parallel computation supported by the MPI or PVM libraries. Our proposal is very simple indeed. The broker assigns a unique timestamp to each query and every processor maintains its arriving-messages queue organized as a priority queue wherein higher priority means lower timestamp. Every processor performs the computations related to each query in strict priority order. The scheme works because during this process it is guaranteed that no messages are in transit and the processors are periodically barrier-synchronized to send departing messages and receive new ones. In practical terms, the only overhead is the maintenance of the priority queue, a cost which should not be significant as we can profit from the many really efficient designs proposed for this abstract data type so far.

The above described type of computation is the one supported by the bulk-synchronous model of parallel computing [9]. People could argue that the need to globally synchronize the processors could be detrimental and that there could be better ways of exploiting parallelism by means of tricks from asynchronous message passing methods. This is not the case for the type of application we are dealing with in this paper. Our results show that even on very inefficient communication platforms, such as a group of PCs connected by a 100MB router switch, we are able to achieve good performance. This is because what is really relevant to optimize is the load balance of distance calculations and the balance of accesses to secondary memory in every processor. In all cases we have observed that the cost of barrier-synchronizing the processors is below 1%. Moreover, this particular way of performing parallel computing and communication allows processors to further reduce overheads by packing together into a large message all messages sent to a given processor. Another significant improvement in efficiency, which leads to super-linear speedups, is the relative increase of the size of the disk cache in every processor as a result of keeping a fraction N/P of the database in the respective secondary memory, where N is the total number of objects stored in the database and P is the number of processors.
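A minimal sketch of this concurrency control, under our own assumptions about the message representation (the Message fields and the two handler stubs are illustrative, not from the paper): each processor drains its inbox in increasing timestamp order once per superstep, so searches and updates are applied in the same global order everywhere.

final class SuperstepWorker {
    static final class Message {
        final long timestamp; final boolean isUpdate; final Object payload;
        Message(long t, boolean u, Object p) { timestamp = t; isUpdate = u; payload = p; }
    }

    private final java.util.PriorityQueue<Message> inbox =
        new java.util.PriorityQueue<>(java.util.Comparator.comparingLong((Message m) -> m.timestamp));

    void receive(Message m) { inbox.add(m); }   // messages delivered at the superstep boundary

    void processSuperstep() {
        while (!inbox.isEmpty()) {
            Message m = inbox.poll();           // lowest timestamp first
            if (m.isUpdate) {
                // apply the insertion/deletion to the local EGNAT
            } else {
                // run the local search and buffer the results for the next superstep
            }
        }
    }
}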


To show the suitability of the EGNAT data structure for supporting query processing in parallel, we evenly distributed the database among the P processors of a 10-processor cluster of PCs. Queries are processed in batches as we assume an environment in which a high traffic of queries is arriving at the broker machine. The broker routes the queries to the processors in a circular manner. We take batches as we use the BSP model of computing for performing the parallel computation and communication. In the bulk-synchronous parallel (BSP) model of computing [9], any parallel computer (e.g., PC cluster, shared or distributed memory multiprocessor) is seen as composed of a set of P processor-local-memory components which communicate with each other through messages. The computation is organized as a sequence of supersteps. During a superstep, the processors may only perform sequential computations on local data and/or send messages to other processors. The messages are available for processing at their destinations by the next superstep, and each superstep is ended with the barrier synchronization of the processors.

The running time results shown below were obtained with three different metric-space databases. (a) A 10-dimensional vector space with 100,000 points generated using a Gaussian distribution with average 1 and variance 0.1; the distance function between objects is the Euclidean distance. (b) A Spanish dictionary with 86,061 words, where the distance between two words is calculated by counting the minimum number of insertions, deletions and replacements of characters needed to make the two words identical. (c) An image collection represented by 100,000 vectors of dimension 15. Searches were performed by selecting uniformly at random 10% of the database objects. In all cases the search of these objects is combined with the insertion of the same objects in a random way: after searching 10 objects we randomly chose one of them and inserted it into the database, and for every 10 objects inserted we deleted one of them, also selected at random. Notice that we repeated the same experiments shown below but without inserting/deleting objects, and we observed no significant variation in the total running time. This means that the overhead introduced by the priority-queue-based approach for concurrency control we propose in this paper has no relevant effect on the total running time.

We used two approaches to the parallel processing of batches of queries. In the first case, we assume that a single EGNAT has been constructed considering the whole database. The first levels of the tree are kept duplicated in every processor. The size of this part of the tree is small enough to fit in main memory. Further down, the tree branches or sub-trees are evenly distributed onto the secondary memory of the processors. A query starts at any processor and the sequential algorithm is used for the first levels of the tree. After this, copies of the query “travel” to other processors to continue the search in the sub-trees stored in the remote secondary memories. That is, queries can be divided into multiple copies to follow the tree paths that may contain valid results. These copies are processed in parallel when they are sent to different processors. We call this strategy the global index approach.

Fig. 1. Results for the global index approach. Figures (a) and (c) show the ratio of the number of sequential distance evaluations to parallel ones for 1 and 10 new queries per superstep, respectively, and figures (b) and (d) show the respective effect in the running times (curves for the Gauss-vector, image and Spanish-dictionary databases against the number of processors).

Figure 1 shows running time and distance-calculation measures for the global index approach against an optimized sequential implementation. The results show reasonable performance for a small number of processors but not for a large number of processors. This is because performance is mainly affected by the load imbalance observed in the distance-calculation process, whose cost significantly predominates over communication and synchronization costs. The results for the ratio of distance calculations of the sequential algorithm to the parallel one show that there is a large imbalance in this process. In the following we describe the second case for parallelization, which has better load balance. Notice that this case requires more communication because of the need to broadcast each query to all processors. In the second case, an independent EGNAT is constructed on the piece of the database stored in each processor. Queries in this case start at any processor at the beginning of each superstep. The first step in processing any query is to send a copy of it to all processors, including the originating one. At the next superstep the search

Fig. 2. Results for the local index approach. Figures (a) and (c) show the ratio of the number of sequential distance evaluations to parallel ones for 1 and 10 new queries per superstep, respectively, and figures (b) and (d) show the respective effect in the running times (curves for the Gauss-vector, image and Spanish-dictionary databases against the number of processors).

algorithm is performed in the respective EGNAT and all solutions found are reported to the processor that originated the query. New objects are distributed circularly onto the processors and insertions are performed locally. We call this strategy the local index approach. In Figure 2 we present results for running time and distance calculations for Q = 1 and 10 new queries per superstep, respectively. The results show that the local index approach has much better load balance and is thereby able to achieve better speedups. In some cases this speedup is super-linear because of the secondary memory effect. Notice that even processing batches of one query per processor is good enough to amortize the cost of communication and synchronization. Interestingly, the running times obtained in Figure 2 are very similar to the case in which no write operations are performed in the index [7]. This means that the overhead introduced by the concurrency control method is indeed negligible.

4 Conclusions

We have described the efficient parallelization of the EGNAT data structure. As it allows insertions and deletions, we proposed a very efficient way of dealing with concurrent read/write operations upon an EGNAT evenly distributed on the processors. The local index approach is more suitable for this case, as the dominant factor in performance is the proper balance of the distance calculations taking place in parallel. The results using different databases show that the EGNAT allows an efficient parallelization in practice. The results for running time show that it is feasible to significantly reduce the running time by including more processors. This is because a number of distance calculations for a given query can take place in parallel during query processing. We emphasize that for the use of parallel computing to be justified we must put ourselves in a situation of very high traffic of user queries. The results show that in practice just a few queries per unit time are enough to achieve good performance. That is, the combined effect of good load balance in both distance evaluations and accesses to secondary memory across the processors is sufficient to achieve efficient performance.

References

1. R. Baeza-Yates, W. Cunto, U. Manber, S. Wu: Proximity matching using fixed-queries trees. 5th Combinatorial Pattern Matching (CPM'94), 1994.
2. S. Brin: Near neighbor search in large metric spaces. The 21st VLDB Conference, 1995.
3. W. Burkhard, R. Keller: Some approaches to best-match file searching. Communications of the ACM, 1973.
4. P. Ciaccia, M. Patella, P. Zezula: M-tree: An efficient access method for similarity search in metric spaces. The 23rd International Conference on VLDB, 1997.
5. G. Navarro, N. Reyes: Fully dynamic spatial approximation trees. In: 9th International Symposium on String Processing and Information Retrieval (SPIRE 2002), pages 254–270, Springer, 2002.
6. C. Traina, A. Traina, B. Seeger, C. Faloutsos: Slim-trees: High performance metric trees minimizing overlap between nodes. VII International Conference on Extending Database Technology, 2000.
7. R. Uribe, G. Navarro, R. Barrientos, M. Marin: An index data structure for searching in metric space databases. International Conference on Computational Science (ICCS 2006), LNCS 3991 (part I), pp. 611–617, Springer-Verlag, Reading, UK, May 2006.
8. J. Uhlmann: Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, 1991.
9. L.G. Valiant: A bridging model for parallel computation. Comm. ACM, 1990.
10. P. Yianilos: Data structures and algorithms for nearest neighbor search in general metric spaces. 4th ACM-SIAM Symposium on Discrete Algorithms, 1993.

An Efficient Algorithm and Its Parallelization for Computing PageRank

Jonathan Qiao, Brittany Jones, and Stacy Thrall

Converse College, Spartanburg, SC 29302, USA
{Jonathan.Qiao,Brittany.Jones,Stacy.Thrall}@converse.edu

Abstract. In this paper, an efficient algorithm and its parallelization to compute PageRank are proposed. There are existing algorithms to perform such tasks. However, some algorithms exclude dangling nodes which are an important part and carry important information of the web graph. In this work, we consider dangling nodes as regular web pages without changing the web graph structure and therefore fully preserve the information carried by them. This differs from some other algorithms which include dangling nodes but treat them differently from regular pages for the purpose of efficiency. We then give an efficient algorithm with negligible overhead associated with dangling node treatment. Moreover, the treatment poses little difficulty in the parallelization of the algorithm. Keywords: PageRank, power method, dangling nodes, algorithm.

1 Introduction

A significant amount of research effort has been devoted to hyperlink analysis of the web structure since Sergey Brin and Larry Page brought their innovative work [1] to the information science community in 1998. Brin and Page launched Google at a time when there were already a number of search engines. Google has succeeded mainly because it has a better way of ranking web pages, which is called PageRank by its founders. The original PageRank model is solely based on the hyperlink structure. It considers the hyperlink structure of the web as a digraph. For each dangling node (a page without any out-link), edges are added so that it is connected to all nodes. Based on this, an adjacency matrix can be obtained. The task is then to find the eigenvector of the adjacency matrix, whose elements are the ranking values of the corresponding web pages. Due to the size of the web graph, the dominant way of finding the eigenvector is the classic power method [2], which is known for its slow convergence. Because of this, a large amount of work has been done to speed up the PageRank computation since 1998. One factor contributing substantially to the computational cost is the existence of dangling nodes. A single dangling node adds a row full of ones to the adjacency matrix. Including dangling nodes not only produces a significantly


larger matrix, but also makes it no longer sparse. Some existing algorithms exclude dangling nodes from consideration in order to speed up the computation. Those which consider them do not treat them as regular pages for the purpose of efficiency. In either case, the original web structure is changed, and more importantly the information carried by dangling nodes is changed as well. In this paper, we aim at finding the PageRank vector using the power method with dangling nodes fully included and treated as regular pages. We then propose a highly efficient algorithm which reduces the overhead of our treatment of dangling nodes to a negligible amount in either the centralized or the distributed approach.

2 PageRank Review

As discussed in Sect. 1, the web structure is treated as a digraph, which gives an N × N column-wise adjacency matrix, where N is the number of total web pages. Let the matrix be A. Let D be an N × N diagonal matrix with each entry the reciprocal of the sum of the corresponding column of A. Then, AD is a column-stochastic Markov matrix. Let v = ADe, where e is a vector of all ones. The ith entry of v tells how many times the ith page gets recommended by all web pages. Introducing the weight of a recommendation made by a page, which is inversely proportional to the total number of recommendations made by that page, gives the equation

v = ADv .   (1)

Equation (1) suggests that v is the ranking vector as well as an eigenvector of AD with the eigenvalue 1. By nature of the web structure, the Markov matrix AD is not irreducible [2]. This implies that there is more than one independent eigenvector associated with the eigenvalue 1. Google's solution to this problem is to replace AD by a convex combination of AD and another stochastic matrix ee^T/N,

v = (αAD + ((1 − α)/N) ee^T) v ,   (2)

where 0 < α < 1 (Google uses α = 0.85). The underlying rationale is: with probability 0.85, a random surfer chooses any out-link on a page arbitrarily if the page has out-links; with probability 0.15, or if a page is a dead end, another page is chosen at random from the entire web uniformly. The convex combination in (2) makes the ranking vector v the unique eigenvector associated with the eigenvalue 1 up to scaling, and the subdominant eigenvalue α, which is the rate of convergence when the power method is applied [3,4,2]. By normalizing the initial and subsequent vectors, a new power method formulation of (2) is

v_{i+1} = αADv_i + ((1 − α)/N) e ,   (3)

where v_0 = (1/N)e because e^T v = 1.
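For illustration only, here is a dense-matrix sketch of the iteration in (3) (our own code, not the authors'); a real PageRank implementation streams the sparse link file instead of materializing AD, which is what the rest of the paper is about.

final class PowerMethodSketch {
    // v_{i+1} = alpha * (AD) v_i + (1 - alpha)/N * e, with AD given as a small
    // dense column-stochastic matrix purely for illustration.
    static double[] pageRank(double[][] AD, double alpha, int iterations) {
        int n = AD.length;
        double[] v = new double[n];
        java.util.Arrays.fill(v, 1.0 / n);                   // v_0 = (1/N) e
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double sum = 0.0;
                for (int j = 0; j < n; j++) sum += AD[i][j] * v[j];
                next[i] = alpha * sum + (1.0 - alpha) / n;   // teleportation term
            }
            v = next;
        }
        return v;
    }
}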

3 Related Work

Most algorithms so far focus on reducing the cost of each power method iteration, although there have been efforts aiming at reducing the number of iterations, such as [5]. Because the major obstacle to speeding up each iteration lies in the size of the data, it is natural to compress the data such that it fits into main memory, as in [6,7,8]. However, as pointed out by [6], even the best compression scheme requires about 0.6 bytes per hyperlink, which still results in an exceedingly large space requirement. Others design I/O-efficient algorithms, such as [9,10], which can handle any size of data without any particular memory requirement. Also aiming at reducing the cost of each iteration, many other works, such as [11,12], combine linear algebra techniques to reduce the computational cost of each iteration. Nevertheless, I/O-efficiency must always be considered. In [9], Haveliwala proposes two algorithms, the Naive Algorithm and the Blocking Algorithm, and demonstrates the high I/O efficiency of the latter when the ranking vector does not fit in main memory. Applying the computational model in (3), both algorithms use a binary link file and two vectors, the source vector holding the ranking values for iteration i and the destination vector holding the ranking values for iteration i + 1. Each diagonal entry of the matrix D is stored in the corresponding row of the link file as the number of out-links of each source page. For the purpose of the cost analysis, there are several more parameter families: M, B(·), K and nnz(·).

1. The total number of available memory pages will be denoted M.
2. The total number of pages of disk/memory resident will be denoted B(·).
3. The total number of dangling nodes will be denoted K.
4. The total number of links of a web digraph will be denoted nnz(·).

Unless specified, the cost is for a single iteration since we focus on reducing the cost of each iteration in this work.

3.1 Matrix-Vector Multiplication Treatment

Given the computation model in (3), computing a single entry of v_{i+1} requires reading a row of matrix A and the source vector. Since the link file is sorted by source node ids and all elements of a row of matrix A are spread out in the link file (a column of A corresponds to a row in the adjacency list), a less careful implementation would result in one pass for calculating a single value. Preprocessing the link file so that it is sorted by destination node ids not only needs tremendous effort, but also adds N entries for each dangling node to the link file and thus significantly increases the storage requirement and the I/O cost. To get around this difficulty, we could use the column version of the matrix-vector multiplication

Av = v_0 A∗0 + v_1 A∗1 + · · · + v_{n−1} A∗(n−1) ,   (4)


where each A∗i is the ith column of A. The computation model in (4) requires one pass of the link file and the source vector for finding the destination vector, provided the destination vector can be kept memory resident.
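A sketch of the column-oriented pass implied by (4), under our own representation of the link file as a stream of (source, destination) pairs with a separate out-degree array (these names are assumptions, not the paper's format): each source scatters its weighted rank to its destinations, so one sequential pass suffices when the destination vector fits in memory.

final class ColumnPass {
    // One column-wise pass: dest accumulates alpha * v[src] / outDegree[src] per edge (src -> d).
    static double[] onePass(Iterable<int[]> linkFile, int[] outDegree,
                            double[] source, double alpha) {
        int n = source.length;
        double[] dest = new double[n];
        java.util.Arrays.fill(dest, (1.0 - alpha) / n);      // constant term of (3)
        for (int[] edge : linkFile) {                         // edge = {srcId, destId}
            int src = edge[0], d = edge[1];
            dest[d] += alpha * source[src] / outDegree[src];  // column src contributes to entry d
        }
        return dest;
    }
}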

3.2 The Blocking Algorithm

Let v′ be the destination vector, v be the source vector and L be the adjacency list of the link file. To handle the memory bottleneck, the Blocking Algorithm in [9] partitions v′ evenly into b blocks so that each v′_i fits in (M − 2) pages. L is vertically partitioned into b blocks. Note that each L_i is then represented by a horizontal block A_i. The partition gives the equation

v′_i = αA_iDv + ((1 − α)/N) e ,   (5)

where i ∈ {0, 1, · · · , b − 1}. Based on (5), computing each v′_i requires one pass of v′_i, L_i and v. Updating the source vector at the end adds another pass of v. Therefore, the cost (referred to as C_block) is

C_block = Σ_{i=0}^{b−1} B(L_i) + (b + 1)B(v) + Σ_{i=0}^{b−1} B(v′_i) = (b + 2)B(v) + (1 + ε)B(L) ,   (6)

where ε is a small positive number due to the partition overhead. The cost in (6) does not scale up linearly when b is not a constant, which can happen given the remarkably fast growth of the web repository.
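A sketch of one block of the Blocking Algorithm in (5), again under our own edge-stream representation (field names are ours, not from [9]): block i of the destination vector is filled from the horizontal slice L_i while the whole source vector is read.

final class BlockingPass {
    // Fill destination block i (ids in [blockStart, blockStart + destBlock.length)) from slice Li.
    static void computeBlock(Iterable<int[]> Li, int[] outDegree, double[] source,
                             double[] destBlock, int blockStart, double alpha) {
        java.util.Arrays.fill(destBlock, (1.0 - alpha) / source.length);
        for (int[] edge : Li) {                      // edge = {srcId, destId}, destId lies in block i
            int src = edge[0], dst = edge[1];
            destBlock[dst - blockStart] += alpha * source[src] / outDegree[src];
        }
    }
}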

4 Dangling Nodes Treatment

The cost model in (6) only counts disk I/Os, with the assumption that the in-memory computational cost is negligible compared to the I/O cost. The assumption is justifiable when A is sparse, since the in-memory cost is approximately the number of 1s in A. This can be seen in [9], whose test data set, which originally contains close to 80% dangling nodes, is preprocessed to exclude all dangling nodes. Many web pages are by nature dangling nodes, such as a PDF document, an image, a page of data, etc. In fact, dangling nodes are an increasingly large portion of the web repositories. For some subsets of the web, they are about 80% [2]. Some dangling nodes are highly recommended by many other important pages, so simply throwing them away may result in a significant loss of information. This is why some existing works do not exclude dangling nodes, such as [12], which computes the ranking vector in two stages. In the first stage, dangling nodes are lumped into one; in the second stage, non-dangling nodes are combined into one. The global ranking vector is formed by concatenating the two vectors. One of the goals of this work is to make improvements over [9] with dangling nodes included. Different from [12] and some other algorithms which include dangling nodes, our approach treats them as regular web pages. It can be easily


seen that including dangling nodes does not add any storage overhead, since a dangling node does not appear in the link file as a source id. Thus, our approach does not change the I/O cost model in (6). To minimize the in-memory computational overhead imposed by the inclusion of dangling nodes, we decompose A into Â + eΔ^T, where Â is the adjacency matrix of the original web graph (a row full of zeros for a dangling node), and Δ is an N × 1 vector with the ith entry 1 if the ith node is a dangling node and 0 otherwise. Substituting the decomposition of A into (3), we have

v_{i+1} = α(Â + eΔ^T)Dv_i + c = αÂDv_i + c_i + c ,   (7)

where c = e(1 − α)/N is a constant vector at all iterations, and c_i is a constant vector at iteration i, whose constant entry is computed by adding all dangling-node ranking values at the beginning of each iteration, which is

Σ_{out-degree(i)=0} (Dv)_i .   (8)

This involves K (the number of dangling nodes) additions and one multiplication. In the implementation, we extract the out-degrees from every L_i and save them in a separate file, which has N entries and can be used for computing (8). This also has the advantage of reducing the storage overhead caused by the partition, since each nonzero out-degree repeatedly appears in every L_i. A substantial gain in the in-memory cost can be achieved. Let G be the original web digraph; then the total number of floating-point additions in (3) is

C_1 = nnz(G) + KN + N = (r + K + 1)N ,   (9)

where r is the average number of out-links, which varies from 5 to 15 [10]. Based on (7), this cost can be reduced to

C_2 = nnz(G) + K = rN + K .   (10)

When K is large compared to N, which is often the case for the web data, C_1 is Θ(N²) while C_2 is Θ(N).
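One possible reading of the dangling-node correction in (7)–(8), sketched under the assumption that every dangling page contributes its rank scaled uniformly by 1/N (the isDangling array is our own illustrative representation): the sum costs K additions per iteration and keeps Â sparse.

final class DanglingSketch {
    // Constant entry of c_i: sum the ranks of the K dangling pages once per iteration
    // and scale the total once (a sketch of (8), not the authors' code).
    static double danglingEntry(double[] source, boolean[] isDangling, double alpha) {
        int n = source.length;
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            if (isDangling[i]) s += source[i];   // K additions
        }
        return alpha * s / n;                    // spread uniformly over the N pages
    }
}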

5 The Parallelization of the Algorithm

The computation model in (7) can be readily parallelized without any extra parallel overhead associated with the inclusion of dangling nodes. When applying the Blocking Algorithm directly, we may vertically partition the link file L into L_0, L_1, · · · , L_{b−1} and distribute them over b nodes in a cluster. Each node holds a partition of L, the source vector v and a partition of v′. This gives the computation model at each node

v′_i = αA_iDv + c_i ,   (11)

where c_i is a constant vector of size N/b with (1 − α)/N for all entries.


The above parallelization does not take the advantage given by the dangling-nodes treatment. Combining (5) and (7), the new parallel formulation can be established as

v′_i = αA_iDv + ĉ_i + c_i ,   (12)

where i ∈ {0, 1, · · · , b − 1}, ĉ_i is a constant vector at each iteration, and c_i is a constant vector at all iterations of the corresponding size as defined in (7). As discussed in Sect. 4, the vector representation of the matrix D is stored in a separate file, which is read b times at each node, the same as a partition of the link file. Computing v′_i at the ith node is carried out in the same fashion as in the serial implementation. It needs to read L_i, v and to write v′_i to update the source vector. The I/O cost is B(L_i) + B(v) + B(v′_i). One advantage of the distributed algorithm can be seen in normalizing the destination vector. A serial implementation needs to read the whole destination vector. In the distributed case, each partition of the destination vector is held in memory at its corresponding node and the normalization can be done concurrently. Therefore, the I/O cost at each node is

C_{I/O} = B(L_i) + (1 + 1/b)B(v) ≈ B(L)/b + B(v) .   (13)

The in-memory computational cost does not have any parallel overhead and is therefore

C_{in-memory} ≈ C_2/b .   (14)

The communication cost can be a bottleneck. To start a new iteration, every node in the cluster needs to collect one block of the updated source vector from every other node, which is 4N/b bytes. The total communication cost is

C_comm = Σ_{i=0}^{b−1} Σ_{j=0, j≠i}^{b−1} 4N/b = 4(b − 1)N .   (15)

By making every pair of nodes communicate concurrently, the communication cost is reduced to O(N) since

C′_comm = C_comm × 2/b ≈ 8N .   (16)

The cost models in (13), (14) and (16) show we could achieve a near linear scale-up and a near linear speed-up, provided that the data size, rN, is large compared to N, since the communication cost is independent of r.

6 Experimental Evaluation

6.1 Experimental Setup

Experiments for the algorithm handling dangling nodes were conducted on a Linux platform on a single dedicated machine with a 2.00 GHz Intel(R) Celeron(R) CPU. Experiments for the parallel algorithm were conducted on a Windows platform on dedicated machines each with a 2.8 GHz Intel Pentium(R)-4 CPU. The memory size in either case is ample in the sense that there is enough physical memory for the OS. The page size of disk access for all experiments is 512 KB. The implementations of both algorithms were in Java. Table 1 shows the basic statistics for the two data sets. The first data set, California, is used solely to test the algorithm of handling dangling nodes. It was obtained from http://www.cs.cornell.edu/Courses/cs685/2002fa/. The second data set, Stanford, is used for both algorithms. It was obtained from http://www.stanford.edu/˜sdkamvar/.

Table 1. Data sets and the cost comparison

Name        Pages  K/N     r     Links  C1     C2     C1/C2
California  9.7K   48.00%  1.67  16K    26.3s  0.77s  34.2
Stanford    281K   0.06%   8.45  2.4M   58.8s  30.7s  1.9

6.2 Results for Handling Dangling Nodes

In Tab. 1, C1 represents the cost based on the computation model in (3), and C2 represents the cost based on the computation model in (7), which handles dangling nodes using the proposed algorithm. The data set California needs only one disk access due to its small size, so its I/O cost is negligible. The dangling nodes in this data set are almost a half of the total pages. The speedup obtained by the proposed algorithm, which is the ratio of C1 to C2, is 34.2. This verifies the two cost models in (9) and (10). The data set Stanford is about 11.2M, which results in about 22 disk accesses. Even though the dangling nodes in the data set are only about 0.06%, the speedup obtained by the proposed algorithm is about 1.9.

6.3 Results for the Parallelization of the Algorithm

The parallel implementation uses the data set Stanford only. The experiments were conducted on clusters with different numbers of nodes. The experimental results in Tab. 2 show we have achieved a near-linear speed-up. One reason for the nice results is that the average out-degree of the experimental data set is 8.45, which makes the data size much larger than the ranking vector size. Therefore, the I/O cost and the in-memory computational cost weigh significantly more than the communication cost.

Table 2. Parallel running times and the corresponding speed-ups

Number of Nodes  1      2      4      8
Elapsed Time     47.1s  23.7s  12.1s  6.2s
Speed-up         N/A    2.0    3.9    7.6

7 Conclusions and Future Work

In this paper, we have derived an efficient algorithm and its parallelization for computing PageRank with dangling nodes fully preserved. Both the analysis and the experimental results demonstrate that our algorithm has little overhead associated with the inclusion of dangling nodes. There are two areas for future work: conducting experiments on larger data sets and on a larger parallel cluster; and exploring more fully the impact of dangling nodes and their treatment on the PageRank computation.

References

1. Brin, S., Page, L., Motwani, R., Winograd, T.: The PageRank citation ranking: bringing order to the web. Technical report, Computer Science Department, Stanford University (1999)
2. Langville, A.N., Meyer, C.D.: Deeper inside PageRank. Internet Math. 1 (2004) 335–380
3. Haveliwala, T.H., Kamvar, S.D.: The second eigenvalue of the Google matrix. Technical report, Computer Science Department, Stanford University (2003)
4. Elden, L.: A note on the eigenvalues of the Google matrix. Report LiTH-MAT-R-04-01 (2003)
5. Kamvar, S., Haveliwala, T., Manning, C., Golub, G.: Extrapolation methods for accelerating PageRank computations. Twelfth International World Wide Web Conference (2003)
6. Randall, K., Stata, R., Wickremesinghe, R., Wiener, J.: The link database: Fast access to graphs of the web. In: the IEEE Data Compression Conference (March 2002)
7. Adler, M., Mitzenmacher, M.: Towards compressing web graphs. In: the IEEE Data Compression Conference (March 2001)
8. Raghavan, S., Garcia-Molina, H.: Representing web graphs. In: the 19th IEEE Conference on Data Engineering, Bangalore, India (March 2003)
9. Haveliwala, T.H.: Efficient computation of PageRank. Technical report, Computer Science Department, Stanford University (Oct. 1999)
10. Chen, Y., Gan, Q., Suel, T.: I/O-efficient techniques for computing PageRank. In: Proc. of the 11th International Conference on Information and Knowledge Management (2002)
11. Kamvar, S., Haveliwala, T., Golub, G.: Adaptive methods for the computation of PageRank. Technical report, Stanford University (2003)
12. Lee, C., Golub, G., Zenios, S.: A fast two-stage algorithm for computing PageRank and its extension. Technical report, Stanford University (2003)

A Query Index for Stream Data Using Interval Skip Lists Exploiting Locality

Jun-Ki Min

School of Internet-Media Engineering, Korea University of Technology and Education
Byeongcheon-myeon, Cheonan, Chungnam, Republic of Korea, 330-708
[email protected]

Abstract. To accelerate the query performance, diverse continuous query index schemes have been proposed for stream data processing systems. In general, a stream query contains the range condition. Thus, by using range conditions, the queries are indexed. In this paper, we propose an efficient range query index scheme QUISIS using a modified Interval Skip Lists to accelerate search time. QUISIS utilizes a locality where a value which will arrive in near future is similar to the current value. Keywords: Stream Data, Query Index, Locality.

1 Introduction

Stream data management systems may receive a huge number of data items from stream data sources while a large number of simultaneous long-running queries is registered and active [1,2]. In this case, if all registered queries are invoked whenever a stream data item arrives, the system performance degrades. Therefore, query indexes are built on the registered continuous queries [3]. When each stream data item arrives, a CQ engine searches for matching queries using these indexes. Existing query indexes simply maintain all queries based on well-known index structures such as the binary search tree and the R-tree. However, some applications of stream data processing, such as stock market and temperature monitoring, have a particular property, namely locality. For example, the temperature in the near future will be similar to the current temperature. Therefore, some or all queries which are currently invoked will be reused in the near future. Hence, the locality of stream data should be considered in the query indexes. In this paper, we present a range query index scheme, called QUISIS (QUery Index for Stream data using Interval Skip lists). Our work is inspired by BMQ-Index [4]. To the best of our knowledge, the Interval Skip List [5] is the most efficient structure to search intervals containing a given point. Thus, QUISIS is based on the Interval Skip List, in contrast to BMQ-Index. Using a temporal interesting list (TIL), QUISIS efficiently finds the query set which can evaluate a newly arrived data item. The experimental results confirm that QUISIS is more efficient than the existing query index schemes.

2 Related Work

Some stream data management systems use balanced binary search trees for query indexes [6]. The query index allows query conditions to be grouped, combining all selections into a group-filter operator. As shown in Figure 1, a group filter consists of four data structures: a greater-than balanced binary tree, a less-than balanced binary tree, an equality hash table, and an inequality hash table.

Fig. 1. An example for query indexes using binary search trees (query conditions: q1: R.a ≥ 1 and R.a < 10; q2: R.a > 5; q3: R.a > 7; q4: R.a = 4; q5: R.a = 6)

When a data item arrives, the balanced binary search trees and hash tables are probed with the value of the tuple. This approach is not appropriate for general range queries which have two bounded conditions. Each bounded condition is indexed in an individual binary search tree. Thus, searching each individual binary tree may produce unnecessary results. In addition, for query indexes, multi-dimensional data access methods such as the R-Tree [7,8] and grid files can be used [9]. In general, the range conditions of queries overlap. These R-tree families are not appropriate for range query indexes since many nodes should be traversed due to the large amount of overlap of query conditions. Recently, for range query indexes, the BMQ-Index has been proposed. A BMQ-Index consists of two data structures: a DMR list and a stream table. The DMR list is a list of DMR nodes. Let Q = {qi} be a set of queries. A DMR node DNj is a tuple ⟨DRj, +DQSetj, −DQSetj⟩. DRj is a matching region (bj−1, bj). +DQSetj is the set of queries qk such that lk = bj−1 for each selection region (lk, uk) of qk. −DQSetj is the set of queries qk such that uk = bj−1 for each selection region (lk, uk) of qk. Figure 2 shows an example of a BMQ-Index. A stream table keeps the recently accessed DMR node. Let QSet(t) be the set of queries for data vt at time t, and let vt be in DNj and vt+1 be in DNh, i.e., bj−1 ≤ vt < bj and bh−1 ≤ vt+1 < bh. QSet(t+1) can be derived as follows:

if j < h, QSet(t + 1) = QSet(t) ∪ [∪_{i=j+1}^{h} +DQSet_i] − [∪_{i=j+1}^{h} −DQSet_i]
if j > h, QSet(t + 1) = QSet(t) ∪ [∪_{i=h+1}^{j} −DQSet_i] − [∪_{i=h+1}^{j} +DQSet_i]
if j = h, QSet(t + 1) = QSet(t)   (1)

The authors of the BMQ-Index insist that, due to the data locality, only a small number of DMR nodes is retrieved when the forthcoming data is not in the current region.


Fig. 2. An example of a BMQ-Index (the DMR list with nodes DN1–DN6 over the boundaries 1, 4, 5, 6, 7, 10 and ∞, together with the stream table, for the query conditions q1–q5 of Fig. 1)

However, the BMQ-Index has some problems. First, if the forthcoming data is quite different from the current data, many DMR nodes should be retrieved in a linear-search fashion. Second, the BMQ-Index supports only (l, u)-style conditions but does not support general conditions such as [l, u] and (l, u]. Thus, as shown in Figure 2, q4 and q5 are not registered in the BMQ-Index. In addition, the BMQ-Index does not work correctly on boundary conditions. For example, if vt is 5.5, then QSet(t) is {q1, q2}. Then, if vt+1 is 5, QSet(t+1) is also {q1, q2} by the above equation. However, the actual query set for vt+1 is {q1}.

3 QUISIS

In this section, we present the details of our proposed approach, QUISIS. As mentioned earlier, QUISIS is based on Interval Skip Lists [5]. Thus, we first introduce Interval Skip Lists, and then present our scheme.

3.1 Interval Skip Lists

Interval Skip Lists are similar to linked lists, except each node in the list has one or more forward pointers. The number of forward pointers of a node is called the level of the node. The level of a node is chosen at random. The probability that a new node has level k is:

P(k) = 0 for k < 1,  (1 − p) · p^{k−1} for k ≥ 1 .   (2)

With p = 1/2, the distribution of node levels will allocate approximately 1/2 of the nodes with one forward pointer, 1/4 with two forward pointers, and so on. A node's forward pointer at level l points to the next node with level greater than or equal to l. In addition, nodes and forward pointers have markers in order to indicate the corresponding intervals. Consider an interval I = (A, B) to be indexed. End points A and B are inserted in the list as nodes. Consider some forward edge from a node with value X to a node with value Y (i.e., X < Y). A marker containing the identifier of I will be placed on edge (X, Y) if and only if the following conditions hold:


– containment: I contains the interval (X, Y).
– maximality: There is no forward pointer in the list corresponding to an interval (X′, Y′) that lies within I and that contains (X, Y).

In addition, if a marker for I is placed on an edge, then the nodes of that edge that have a value contained in I will also have a marker (called eqMarker) placed on them for I. The time complexity of Interval Skip Lists is known to be O(log N), where N is the number of intervals. Since we present an extended version of the search algorithm in Section 3.2, we omit the formal description of the search algorithm for Interval Skip Lists.
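The geometric level distribution in (2) is typically realized by repeated coin flips; the following is a minimal sketch of our own (not the paper's code) with p = 1/2 and a cap at the list's maximum level.

final class LevelSketch {
    // Draw a node level: level k is reached with probability (1 - p) * p^(k-1), p = 1/2.
    static int randomLevel(java.util.Random rnd, int maxLevel) {
        int level = 1;
        while (level < maxLevel && rnd.nextDouble() < 0.5) {
            level++;                     // keep growing with probability p
        }
        return level;
    }
}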

3.2 Behavior of QUISIS

In Figure 3, QUISIS is shown for the case when the current data item is 5.5. Given a search key, the search procedure of Interval Skip Lists starts from the Header. In a stream data environment, locality occurs in the sense that a data item in the near future is similar to the current data. By using this property, we devise QUISIS based on Interval Skip Lists.

Fig. 3. An example of QUISIS (the index built for the query conditions q1–q5 of Fig. 1; the temporal interesting list TIL points, for levels 1 to 4, to the nodes whose intervals contain the current data item 5.5)

In order to keep track of the edges visited for the current data item, a temporal interesting list (TIL) is used. The TIL records the nodes, from the maximum level down to level 1, whose forward pointer at level l represents an interval that contains the current data item. As shown in Figure 3, the interval [5, 6) represented by the node pointed to by TIL with level 1 contains the current data item 5.5. In Figure 3, we can interpret Header and Null such that Header represents the smallest value and Null represents the largest value. So Header is smaller than −∞ and Null is greater than ∞, in contrast to the conventional number system. Thus, the intervals represented by the nodes in the TIL have the following property:

Property 1. The interval given by TIL with level i is contained in the interval given by TIL with level i + 1.

For example, [5, 6) given by TIL with level 1 is contained in [4, 6) given by TIL with level 2 and is also contained in [4, Null) given by TIL with level 3. By using this property, QUISIS reduces the search space efficiently compared with the original Interval Skip Lists.


QSet  // a query set for the previous key
TIL   // a list that points to the nodes of QUISIS

Procedure findQuery(key)
begin
1.  if(TIL[1]->value = key) {
2.    for(i = TIL[1]->level; i ≥ 1; i−−) QSet := QSet − TIL[1]->markers[i]
3.    QSet := QSet ∪ TIL[1]->eqMarker
4.  } else if(TIL[1]->value < key and (TIL[1]->forward[1] = NULL or key < TIL[1]->forward[1]->key)) {
5.    QSet := QSet − TIL[1]->eqMarker
6.    for(i = TIL[1]->level; i ≥ 1; i−−) QSet := QSet ∪ TIL[1]->markers[i]
7.  } else {
8.    QSet := QSet − TIL[1]->eqMarkers
9.    if(TIL[1]->forward[1] = NULL or key ≥ TIL[1]->forward[1]->value) {
10.     for(i := 1; i ≤ maxLevel; i++)
11.       if(TIL[i]->forward[i] = NULL or key < TIL[i]->forward[i]->value) break
12.       else QSet := QSet − TIL[i]->markers[i]
13.   } else {
14.     for(i = 1; i ≤ maxLevel; i++)
15.       if(TIL[i] ≠ Header and key ≥ TIL[i]->value) break
16.       else QSet := QSet − TIL[i]->markers[i]
17.   }
18.   anode := TIL[−−i]
19.   while(i ≥ 1) {
20.     while(anode->forward[i] ≠ NULL and anode->forward[i]->value ≤ key) anode := anode->forward[i]
21.     if(anode ≠ Header and anode->value ≠ key) QSet := QSet ∪ anode->markers[i]
22.     else if(anode ≠ Header) QSet := QSet ∪ anode->eqMarker[i]
23.     TIL[i] := anode
24.     i := i − 1
25.   }
26. }
27. return QSet
end

Fig. 4. The algorithm of the procedure findQuery

In order to exploit the TIL, we devised the procedure findQuery() using Property 1. An outline of an implementation of findQuery() is shown in Figure 4. Basically, the behavior of the procedure findQuery() changes according to the newly arrived data value (i.e., key) and the interval [v1, u1) given by TIL with level 1. If key is equal to v1 of the node n pointed to by TIL with level 1 (Lines 1-3 in Figure 4), all markers on the edges starting from n are removed (Line 2), since QSet may contain queries whose intervals are (v1, -). Instead, the eqMarker of n is added (Line 3), since the eqMarker of n contains the queries whose interval contains v1 and the queries in eqMarker are on the edges which end or start at n. For example, when the data was 5.5, QSet was {q1, q2} and a new data item 5 arrives, TIL[1] is the node 5. Thus, {q2} is removed from QSet by Line 2. And since the eqMarker of node 5 is ∅, the final query result is {q1}. If key is in (v1, u1) (Lines 4-6 in Figure 4), the queries in the eqMarker of n are removed, since QSet may contain queries whose intervals are (-, v1]. Instead, all markers on the edges starting from n are added. If key is not in (v1, u1) (Lines 7-27 in Figure 4), the procedure looks for the interval [vi, ui) given by TIL with level i which contains key (Lines 8-17). This step is separated into two cases: key ≥ u1 (Lines 9-12) and key < v1 (Lines 13-17). Also, in this step, markers on the edges with levels from 1 to i-1 are removed


from QSet (Lines 12 and 16). Then, the procedure gathers queries starting from the node (i.e., anode) whose value is vi (Lines 19-25). In this case, since the marker on the edge of anode with level i is already in QSet, the level i is decreased first (Line 18). If the interval represented by a forward pointer of anode with level i does not contain key, the search traverses to the next node pointed to by the forward pointer of the node with level i (Line 20). If the value of anode is equal to key, the eqMarker of anode is added (Line 22); otherwise the marker on the forward pointer is added (Line 21). Then anode is set to TIL[i] (Line 23) and i is dropped to i − 1 (Line 24). The search procedure continues until the level i reaches 1. For example, when the data was 5.5, QSet was {q1, q2} and a new data item 13 arrives, [4, Null), represented by TIL[3], contains 13. Therefore, the procedure avoids the overhead of searching from Header. {q2} and {q1}, which are the markers of TIL[1] and TIL[2], respectively, are removed from QSet (Lines 9-12). Then, the procedure gathers queries starting from node 4 with level 2 (Line 18). Since [4, 6) does not contain 13, the procedure looks for the next interval [6, Null), represented by node 6 with level 2. Since 13 is in [6, Null) but not equal to 6, the marker q2 is added, and TIL[2] points to node 6. Then, the procedure searches the list from node 6 with level 1. Since [10, inf) contains 13, the marker {q3} is added. Finally, QSet is {q2, q3}. In the aspect of exploiting data locality, our proposed scheme QUISIS is similar to the BMQ-Index. However, since QUISIS is based on Interval Skip Lists, QUISIS is much more efficient than the BMQ-Index in general cases. Our experiments demonstrate the efficiency of QUISIS.

4 Experiments

In this section, we show the efficiency of QUISIS compared with diverse query index techniques: BMQ-Index and Interval Skip Lists. The experiments were performed on a Windows XP machine with a Pentium IV 3.0 GHz CPU and 1 GB RAM. We evaluated the performance of QUISIS using synthetic data over various parameters. We implemented a revised version of BMQ-Index which works correctly on the boundary conditions. The default experimental environment is summarized in Table 1. In Table 1, the length of query range (W) denotes the average length of a query condition normalized by the attribute domain, and the fluctuation level (FL) denotes the average distance of two consecutive data items normalized by the attribute domain. Therefore, as FL decreases, the locality becomes more pronounced.

Table 1. Table of Symbols

Parameter                  Value
Attribute domain           1 ∼ 1,000,000
# of Queries               100,000
Length of query range (W)  0.1% (= 1,000)
# of Data                  1,000 ∼ 10,000
Fluctuation level (FL)     0.01% (= 100)

Fig. 5. The results with varying the number of data: running time (milliseconds) against the number of data items for BMQ-Index, Interval Skip Lists and QUISIS; panels (a)–(c) correspond to FL = 0.01%, 0.1% and 1%.

We empirically performed experiments with varying parameters. However, due to space limitations, we show only the experimental results for FL values of 0.01%, 0.1% and 1%. Our proposed index scheme QUISIS shows the best performance except in the case when FL is 0.01%. In Figure 5-(a), BMQ-Index shows the best performance when FL is 0.01% (i.e., high locality) due to its simple structure. In BMQ-Index, if the forthcoming data is different from the current data, many DMR nodes should be retrieved. Therefore, BMQ-Index shows the worst performance when FL is 0.1% (see Figure 5-(b)) and 1% (see Figure 5-(c)). In other words, BMQ-Index only fits the high-locality cases. In contrast to BMQ-Index, QUISIS shows good performance in all cases, since QUISIS efficiently retrieves the query set using the TIL and removes the overhead of searching from Header. The performance of Interval Skip Lists is not affected by FL. As shown in Figure 5-(c), Interval Skip Lists shows good performance when FL = 1.0%. Particularly, when FL is 0.1% and 1%, Interval Skip Lists is superior to BMQ-Index. Consequently, QUISIS is shown to provide reasonable performance over diverse data localities.

5 Conclusion

In this paper, we present an efficient scheme for query indexing, called QUISIS, which utilizes data locality. QUISIS is based on Interval Skip Lists. In order to maintain the current locality, a TIL (temporal interesting list) is maintained. To show the efficiency of QUISIS, we conducted an extensive experimental study with synthetic data. The experimental results demonstrate that QUISIS is superior to existing query index schemes.

References

1. Arasu, A., Babcock, B., Babu, S., Datar, M., Ito, K., Motwani, R., Nishizawa, I., Srivastava, U., Thomas, D., Varma, R., Widom, J.: STREAM: The Stanford Stream Data Manager. IEEE Data Engineering Bulletin 26 (2003)
2. Chandrasekaran, S., Cooper, O., Deshpande, A., Franklin, M.J., Hellerstein, J.M., Hong, W., Krishnamurthy, S., Madden, S., Reiss, F., Shah, M.A.: TelegraphCQ: Continuous Dataflow Processing. In: ACM SIGMOD Conference (2003)
3. Ross, K.A.: Conjunctive selection conditions in main memory. In: PODS Conference (2002)
4. Lee, J., Lee, Y., Kang, S., Jin, H., Lee, S., Kim, B., Song, J.: BMQ-Index: Shared and Incremental Processing of Border Monitoring Queries over Data Streams. In: International Conference on Mobile Data Management (MDM'06) (2006)
5. Hanson, E.N., Johnson, T.: Selection Predicate Indexing for Active Databases Using Interval Skip Lists. Information Systems 21 (1996)
6. Madden, S., Shah, M.A., Hellerstein, J.M., Raman, V.: Continuously adaptive continuous queries over streams. In: ACM SIGMOD Conference (2002)
7. Guttman, A.: R-Trees: A Dynamic Index Structure for Spatial Searching. In: ACM SIGMOD Conference (1984)
8. Brinkhoff, T., Kriegel, H., Schneider, R., Seeger, B.: The R*-tree: An Efficient and Robust Access Method for Points and Rectangles. In: ACM SIGMOD Conference (1990)
9. Choi, S., Lee, J., Kim, S.M., Jo, S., Song, J., Lee, Y.J.: Accelerating Database Processing at e-Commerce Sites. In: International Conference on Electronic Commerce and Web Technologies (2004)

Accelerating XML Structural Matching Using Suffix Bitmaps

Feng Shao, Gang Chen, and Jinxiang Dong

Dept. of Computer Science, Zhejiang University, Hangzhou, P.R. China
[email protected], [email protected], [email protected]

Abstract. With the rapidly increasing popularity of XML as a data format, there is a large demand for efficient techniques for structural matching of XML data. We propose a novel filtering technique to speed up the structural matching of XML data, which is based on an auxiliary data structure called the suffix bitmap. The suffix bitmap captures in a packed format the suffix tag name list of the nodes in an XML document. By comparing the respective suffix bitmaps, most of the unmatched subtrees of a document can be skipped efficiently in the course of the structural matching process. Using suffix bitmap filtering, we extend two state-of-the-art structural matching algorithms: namely the traversal matching algorithm and the structural join matching algorithm. The experimental results show that the extended algorithms considerably outperform the original ones. Keywords: XML, Suffix Bitmap, Structural Matching, Filtering.

1 Introduction

In the past decade, while XML has become the de facto standard of information representation and exchange over the Internet, efficient XML query processing techniques are still in great demand. The core problem of XML query processing, namely structural matching, still remains a great challenge. In this paper, we propose a novel acceleration technique for structural matching of XML documents. Our method utilizes an auxiliary data structure called the suffix bitmap, which compresses the suffix tag names list of XML nodes in a packed format. In an XML document tree, each node corresponds to a sub-tree, which is rooted at the node itself. The suffix tag names list of an XML node contains all the distinct tag names in its corresponding sub-tree, as described in [2]. For ease of implementation and storage efficiency, we present a novel data structure called the suffix bitmap to compress the suffix tag names list. The suffix bitmap contains the non-structural information of an XML sub-tree. As bitwise computation can be processed efficiently, we can skip most of the unmatched subtrees using bitwise suffix bitmap comparison. Therefore, the suffix bitmap can be deployed to filter the unmatched subtrees of XML documents. In this paper, we will integrate suffix bitmap filtering into the traversal matching algorithm and the structural join matching algorithm. The experiments show that the extended matching algorithms considerably outperform the original algorithms. Also, we present the construction procedure of the suffix bitmap with linear time complexity. To


reduce the memory consumption further, we also present the variable-length suffix bitmap. The rest of the paper is organized as follows. Section 2 describes the suffix bitmap and its construction algorithm. Section 3 integrates the suffix bitmap filtering into original matching algorithms. Section 4 compares the extended matching algorithms to the original algorithms. Section 5 lists some related work.

2 Suffix Bitmap

2.1 Global Order of Tag Names

Given an XML document, we define the global order of tag names. Each distinct tag name has a global sequence number (GSN), which is an incremental number starting at 0. The assignment function is:

Definition 1. γ: tagName → GSN, where GSN is an increasing number that starts from 0, and GSN_tagname1 < GSN_tagname2 iff tagname1 first appears before tagname2 during XML document import.

After assigning global sequence numbers, we obtain the global order of tag names. We represent the global order relation of tag names as an ordered set called the tag names set.

Definition 2. Set_tagname = { tagname0, tagname1, …, tagname_{n−1} }, where the GSN of tagname_i is i.

For example, in Figure 1, tagname1 is bib, tagname2 is book, and so on.

Fig. 1. Suffix Bitmap Filtering
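A minimal sketch of the assignment function of Definition 1 (our own illustration, not the authors' code): tag names are numbered in the order of their first appearance while the document is imported.

final class GsnAssigner {
    // gamma: tagName -> GSN, assigned incrementally from 0 on first appearance.
    private final java.util.Map<String, Integer> gsn = new java.util.LinkedHashMap<>();

    int gsnOf(String tagName) {
        Integer id = gsn.get(tagName);
        if (id == null) {
            id = gsn.size();        // next sequence number: 0, 1, 2, ...
            gsn.put(tagName, id);
        }
        return id;
    }
}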

2.2 Suffix Bitmap

We first give the description of the suffix tag names list. In an XML document tree, each node n corresponds to a sub-tree SubTree_n. The suffix tag names list of node n contains all distinct tag names appearing in SubTree_n. Due to the global order of tag


names, we can represent suffix tag names as a bitmap, called suffix bitmap. We define the suffix bitmap as follows:

For example, in Figure 1, chapter (1111000) has title, chapter, section and text in its subtree. We attach a suffix bitmap to each node in the XML document tree. So the memory consumption of all suffix bitmaps is node_count * m / 8 bytes, where node_count is the number of nodes in the XML document.

2.3 Construction of the Suffix Bitmap

We give a preorder construction algorithm of the suffix bitmap, which runs on XML document import. The detailed description is below:

Suffix Bitmap Construction Algorithm
void StartDocument() {
  createStack()
}
void EndDocument() {
  destroyStack()
}
void StartElement(String name) {
  BSuffixcurr = 1

as follows: J0 = ∅; Jn = ϕ(J_{n−1}, n), n > 0, where G is the aggregation operator. Here ϕ(J_{n−1}, n) denotes the result of evaluating ϕ(T, k) when the value of T is J_{n−1} and the value of k is n. Note that, for each input database D and support threshold δ, the sequence {Jn}_{n≥0} converges. That is, there exists some k for which Jk = Jj for every j > k. Clearly, Jk holds the set of frequent itemsets of D. Thus the frequent itemsets can be defined as the limit of the foregoing sequence. Note that Jk = ϕ(Jk, k + 1), so Jk is also a fixpoint of ϕ(T, k). The relation Jk thereby obtained is denoted by μT(ϕ(T, k)). By definition, μT is an operator that produces a new nested relation (the fixpoint Jk) when applied to ϕ(T, k).

3.2 Fixpoint Algorithm

We develop an algorithm for computing frequent itemsets by using the fixpoint operator defined above. We define a new join operator called sub-join.

Definition 2. Let us consider two relations r and s with the same scheme {Item, Count}. The sub-join is
r ⋈sub,k s = { t | ∃u ∈ r, v ∈ s such that u[Item] ⊆ v[Item] ∧ ∃t′ such that (u[Item] ⊆ t′ ⊆ v[Item] ∧ |t′| = k), t = ⟨t′, v[Count]⟩ }


Here, we treat the result of r ⋈sub,k s as a multiset, as it may produce two tuples t′ with the same support value.

Example 1. Given two relations r and s, the result of r ⋈sub,2 s is shown in Figure 1.

r:                      s:                         r ⋈sub,2 s:
Items    Support        Items       Support        Items    Support
{a}      0              {a, b, c}   3              {a, b}   3
{b, f}   0              {b, c, f}   4              {a, c}   3
{d, f}   0              {d, e}      2              {b, f}   4

Fig. 1. An example of sub-join
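The following Python sketch reproduces Example 1 under Definition 2; it is an illustration of the operator (the function names are ours), not code from the paper.

from itertools import combinations

def sub_join(r, s, k):
    # r, s: lists of (frozenset of items, count); the result is a multiset (a list).
    out = []
    for u_items, _ in r:
        for v_items, v_count in s:
            if u_items <= v_items and len(u_items) <= k:
                # every k-element t' with u[Item] ⊆ t' ⊆ v[Item]
                for extra in combinations(sorted(v_items - u_items), k - len(u_items)):
                    out.append((u_items | frozenset(extra), v_count))
    return out

r = [(frozenset('a'), 0), (frozenset('bf'), 0), (frozenset('df'), 0)]
s = [(frozenset('abc'), 3), (frozenset('bcf'), 4), (frozenset('de'), 2)]
print(sub_join(r, s, 2))   # ({a,b},3), ({a,c},3), ({b,f},4), matching Fig. 1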

Given a database D = (Item, Support) and a support threshold δ, the following fixpoint algorithm computes the frequent itemsets of D.

Algorithm fixpoint
Input: an object-relational database D and support threshold δ.
Output: L, the frequent itemsets in D.
Method:
begin
  L1 := σSupport≥δ( ItemGsum(Support)( S^k_Item(D) ) )
  for (k := 2; T ≠ ∅; k++) {
    S := sub_join(Lk−1, D)
    T := σSupport≥δ( ItemGsum(Support)(S) )
    Lk := Lk−1 ∪ T
  }
  return Lk;
end

procedure sub_join(T: frequent (k−1)-itemsets; D: database)
  for each itemset l1 ∈ T, for each itemset l2 ∈ D
    c := l1 ⋈sub,k l2
    if has_infrequent_subset(c, T) then delete c
    else add c to S;
  return S;

procedure has_infrequent_subset(c: candidate k-itemset; T: frequent (k−1)-itemsets)
  for each (k−1)-subset s of c
    if s ∉ T then return TRUE;
  return FALSE;
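A plain Python reading of the fixpoint computation is sketched below. It is not the paper's implementation: it deduplicates candidates and recomputes each support directly as the sum of the supports of the database itemsets containing the candidate, as in the sub-join example of Fig. 1; all names are ours.

from itertools import combinations

def frequent_itemsets(D, delta):
    # D: list of (frozenset of items, support) pairs; returns {frequent itemset: support}.
    def support(itemset):
        return sum(sup for items, sup in D if itemset <= items)

    singletons = {frozenset([x]) for items, _ in D for x in items}
    L = {c: support(c) for c in singletons if support(c) >= delta}
    k = 2
    while True:
        seeds = [c for c in L if len(c) == k - 1]
        # join step: k-subsets of a database itemset that extend a frequent (k-1)-itemset
        cands = set()
        for seed in seeds:
            for items, _ in D:
                if seed <= items:
                    for extra in combinations(sorted(items - seed), 1):
                        cands.add(seed | frozenset(extra))
        # prune step: drop candidates with an infrequent (k-1)-subset (Apriori property)
        cands = {c for c in cands if all(c - {x} in L for x in c)}
        T = {c: support(c) for c in cands if support(c) >= delta}
        if not T:          # fixpoint reached
            return L
        L.update(T)
        k += 1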

4 A Logic Query Language

In this section, we present a logic query language to model the association rule mining, naive Bayesian classification and partition-based clustering.


Association rule mining
We present an operational semantics for association rule mining queries expressed as a Datalogcv,¬ program from fixpoint theory. We present a Datalog program, shown below, which computes the frequent itemsets.

1. cand(J, C) ← freq(I, C), J ⊂ I, |J| = 1
2. large(J, C) ← cand(J, C), C > δ
3. T(x, C2) ← large(J, C1), freq(I, C2), x ⊂ I, J ⊂ x, |x| = max(|J|) + 1
4. T(genid(), x, C) ← T(x, C), ¬has_infrequent_subset(x)
5. cand(x, sum⟨C⟩) ← T(id, x, C)
6. large(x, y) ← cand(x, y), y > δ

Rule 1 generates the set of 1-itemsets from the input frequency table. Rule 2 selects the frequent 1-itemsets whose support is greater than the threshold. The program performs two kinds of actions, namely join and prune. In the join component, rule 3 performs the sub-join operation on the table large generated by rule 2 and the input frequency table. The prune component (rule 4) employs the Apriori property to remove candidates that have a subset that is not frequent. The test for infrequent subsets is shown in the procedure has_infrequent_subset(x). Datalog has set semantics; in the above program, we treat T facts as multisets, i.e., with bag semantics, by using a system-generated id to simulate the multiset operation. Rule 5 counts the sum total of all supports corresponding to each candidate itemset generated in table T so far. Finally, rule 6 computes the frequent itemsets by selecting the itemsets in the candidate set whose support is greater than the threshold. We now show the program that defines has_infrequent_subset(x):

has_infrequent_subset(x) ← s ⊂ x, |s| = |x| − 1, ∀y[large(y, C) → y ≠ s]

Once the frequent itemset table has been generated, we can easily produce all association rules.

Naive Bayesian Classification
Let us consider a relation r with attributes A1, ..., An and a class label C. The Naive Bayesian classifier assigns an unknown sample X to the class Ci if and only if P(Ci|X) > P(Cj|X), for 1 ≤ j ≤ m, j ≠ i. We present a Datalog program demonstrating that the Naive Bayesian classification task can be performed in a deductive environment. The program first evaluates the frequencies of the extension of r, of each class, and of each pair of attribute Ai and class:

freq_r(A1, ..., An, C, count(∗)) ← r(A1, ..., An, C)
freq_class(C, count(∗)) ← r(A1, ..., An, C)
freq_Ai_class(Ai, C, count(∗)) ← r(A1, ..., An, C)


Then we obtain the probabilities P(Ai | C) as follows:

Pr_class(C, p) ← freq_r(A1, ..., An, C, nr), freq_class(C, nc), p = nc/nr
Pr_A_class(A, C, p) ← freq_A_class(A, C, nA,C), freq_class(C, nc), p = nA,C/nc

Finally, we get the answer predicate Classifier(x1, ..., xk, class):

Pr(x1, ..., xk, class, p) ← r(x1, ..., xk), Pr_A_class(A, class, p), ∃ti ∈ Pr_A_class, 1 ≤ i ≤ k, x1 = t1.A, ..., xk = tk.A, t1.class = ... = tk.class, p = ∏i ti.p
P(x1, ..., xk, class, p) ← Pr(x1, ..., xk, class, p1), Pr_class(class, p2), p = p1 × p2
Classifier(x1, ..., xk, class) ← P(x1, ..., xk, class, p), p = max{P.p}
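To make the computation behind these rules concrete, here is a small Python sketch of the same naive Bayes estimate (our illustration, without smoothing; the function names are hypothetical): class priors are nc/n, conditional probabilities are nA,C/nc, and the classifier returns the class maximizing their product.

from collections import Counter

def train(rows):
    # rows: list of (attribute_tuple, class_label) pairs, i.e. the extension of r.
    n = len(rows)
    class_counts = Counter(c for _, c in rows)                 # freq_class
    attr_class_counts = Counter()                              # freq_A_class
    for attrs, c in rows:
        for i, a in enumerate(attrs):
            attr_class_counts[(i, a, c)] += 1
    return n, class_counts, attr_class_counts

def classify(x, model):
    n, class_counts, attr_class_counts = model
    best_class, best_p = None, -1.0
    for c, nc in class_counts.items():
        p = nc / n                                             # Pr_class
        for i, a in enumerate(x):
            p *= attr_class_counts.get((i, a, c), 0) / nc      # Pr_A_class
        if p > best_p:
            best_class, best_p = c, p
    return best_class                                          # Classifier: argmax over classes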

Example 2. We wish to predict the class label of an unknown sample using naive Bayesian classification, given training data. The data samples are described by the attributes age, income, student, and credit-rating. The class label attribute, buys-computer, has two distinct values, namely, YES and NO. The unknown sample we wish to classify is X = (age = "…

Since λk^(1)∗ > 0 we obtain

∇x fk(x∗) w + Σ_{i=1, i≠k}^{n} (λi^(1)∗ / λk^(1)∗) ∇x fi(x∗) + ∇x h(x∗) (λ^(2)∗ / λk^(1)∗) = 0.   (9)

Now with λi^(1)∗ / λk^(1)∗ ≥ 0 as the multipliers of the m − 1 inequality constraints in (8), the goals γi satisfy complementarity by definition, since γi = fi(x) whenever λi^(1) = 0:

(λi^(1)∗ / λk^(1)∗) (fi(x) − γi) = 0,  ∀ i ≠ k.   (10)

Using the above together with feasibility of x for (8), we obtain that the point (x∗, λi^(1)∗/λk^(1)∗, λ^(2)∗/λk^(1)∗) satisfies the first order necessary optimality conditions of (8).

Note that as in the case of weighted-sum method if one uses the original NBI method to get the equivalence, due to the presence of equality constraints F (x)− Φβ − tˆ n = 0, the components of multiplier λ(1)∗ ∈ Rm can be positive or negative. Thus one could obtain negative weights. In such a case the equivalence of NBI and goal programming problem requires additional assumption that the components of λ(1)∗ are of the same sign while no such assumptions are needed in the mNBI method.

5 Conclusions

In this paper, we presented an efficient formulation of the NBI algorithm for getting an even spread of efficient points. This method, unlike the NBI method, does not produce dominated points and is theoretically equivalent to the weighted-sum or ε-constraint method. This paper also proposes a way to reduce the number of subproblems to be solved for non-convex problems. We compared the mNBI method with other popular methods like the weighted-sum method and goal programming methods using Lagrange multipliers. It turned out that the mNBI method does not require any unusual assumptions, in contrast to the relationship of the NBI method with the weighted-sum method and goal programming method. Lastly, we would like to mention that since some other classes of methods, like the Normal-Constraint Method [9] or the Adaptive Weighted Sum Method [8], use a similar line or inclined search based constraint in their sub-problems, the solutions of the sub-problems of these methods are also in general not Pareto-optimal, and hence the mNBI method presented in this paper is superior to them.


Acknowledgements. The author acknowledges the partial financial support by the Gottlieb-Daimler- and Karl Benz-Foundation.

References 1. I. Das and J.E. Dennis. A closer look at drawbacks of minimizing weighted sum of objecties for Pareto set generation in multicriteria optimization problems. Structural Optimization, 14(1):63–69, 1997. 2. I. Das and J.E. Dennis. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal of Optimization, 8(3):631–657, 1998. 3. K. Deb. Multi-objective optimization using evolutionary algorithms. Chichester, UK: Wiley, 2001. 4. M. Ehrgott. Multicriteria Optimization. Berlin: Springer, 2000. 5. H. Eschenauer, J. Koski, and A. Osyczka. Multicriteria Design Optimization. Berlin: Springer-Verlag, 1990. 6. F. W. Gembicki. Performance and Sensitivity Optimization: A Vector Index Approach. PhD thesis, Case Western Reserve University, Cleveland, OH, 1974. 7. I. Y. Kim and O.de Weck. Multiobjective optimization via the adaptive weighted sum method. In 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2004. 8. I. Y. Kim and O.de Weck. Adaptive weighted sum method for multiobjective optimization: a new method for Pareto front generation. Structural and Multidisciplinary Optimization, 31(2):105–116, 2005. 9. A. Messac and C. A. Mattson. Normal constraint method with guarantee of even representation of complete pareto frontier. AIAA Journal, 42(10):2101–2111, 2004. 10. K. Miettinen. Nonlinear Multiobjective Optimization. Kluwer, Boston, 1999. 11. Andrzej Osyczka. Evolutionary Algorithms for Single and Multicriteria Design Optimization. Physica Verlag, Germany, 2002. ISBN 3-7908-1418-0. 12. Eric Sandgren. Multicriteria design optimization by goal programming. In Hojjat Adeli, editor, Advances in Design Optimization, chapter 23, pages 225–265. Chapman & Hall, London, 1994. 13. R. B. Stanikov and J. B. Matusov. Multicriteria Optimization and Engineering. New York: Chapman and Hall, 1995.

An Improved Laplacian Smoothing Approach for Surface Meshes Ligang Chen, Yao Zheng, Jianjun Chen, and Yi Liang College of Computer Science, and Center for Engineering and Scientific Computation, Zhejiang University, Hangzhou, Zhejiang, 310027, P.R. China {ligangchen,yao.zheng,chenjj,yliang}@zju.edu.cn

Abstract. This paper presents an improved Laplacian smoothing approach (ILSA) to optimize surface meshes while maintaining the essential characteristics of the discrete surfaces. The approach first detects feature nodes of a mesh using a simple method, and then moves its adjustable or free node to a new position, which is found by first computing an optimal displacement of the node and then projecting it back to the original discrete surface. The optimal displacement is initially computed by the ILSA, and then adjusted iteratively by solving a constrained optimization problem with a quadratic penalty approach in order to avoid inverted elements. Several examples are presented to illustrate its capability of improving the quality of triangular surface meshes. Keywords: Laplacian smoothing, surface mesh optimization, quadratic penalty approach.

1 Introduction
Surface triangulations are used in a wide range of applications (e.g. computer graphics, numerical simulations, etc.). For finite element methods, the quality of surface meshes is of paramount importance, because it greatly influences the ability of mesh generation algorithms to generate qualified solid meshes. Since surface meshes define the external and internal boundaries of computational domains where boundary conditions are imposed, they also influence the accuracy of numerical simulations. Mesh modification and vertex repositioning are two main methods for optimizing surface meshes [1, 2]. While mesh modification methods change the topology of the mesh, vertex repositioning, also termed mesh smoothing, redistributes the vertices without changing the connectivity. This paper only focuses on smoothing techniques for surface mesh quality improvement. Despite their popularity in optimizing 2D and 3D meshes [3, 4], smoothing methods for surface meshes present significant challenges due to additional geometric constraints, e.g. minimizing changes in the discrete surface characteristics such as discrete normals and curvature. When improving surface mesh quality by vertex repositioning, changes in the surface properties can usually be maintained small by keeping the vertex movements small and by constraining the vertices to a smooth


surface underlying the mesh or to the original discrete surface. One approach commonly used to constrain nodes to the underlying smooth surface is to reposition each vertex in a locally derived tangent plane and then to pull the vertex back to the smooth surface [1, 5]. Another one is to reposition them in a 2D parameterization of the surface and then to map them back to the physical space [6, 7]. In this paper, an improved Laplacian smoothing approach (ILSA) is presented to enhance the quality of a surface mesh without sacrificing its essential surface characteristics. The enhancement is achieved through an iterative process in which each adjustable or free node of the mesh is moved to a new position that is on the adjacent elements of the node. This new position is found by first computing an optimal displacement of the node and then projecting it back to the original discrete surface. The optimal displacement is initially obtained by the ILSA, and then adjusted iteratively by solving a constrained optimization problem with a quadratic penalty approach in order to avoid inverted elements.

2 Outline of the Smoothing Procedure The notations used in the paper are as follows. Let T = (V , E , F ) be a surface mesh, where V denotes the set of vertices of the mesh, E the set of edges and F the set of triangular faces. f i , ei and v i represents the i ' th face, edge and vertex of the mesh

respectively. A(b) denotes the set of all entities of type A connected to or contained in entity b , e.g., V ( f i ) is the set of vertices of face f i and F ( v i ) is the set of faces connected to vertex v i , which is also regarded as the local mesh at v i determined by these faces. We also use | S | to denote the number of elements of a set S . The procedure begins with a simple method to classify the vertices of the mesh into four types: boundary node, corner node, ridge node and smooth node. The first two types of vertices are fixed during the smoothing process for feature preservation and the last two are referred to as adjustable nodes. More sophisticated algorithms for detecting salient features such as crest lines on discrete surfaces can be adopted [8]. Then in an iterative manner, a small optimal displacement is computed for each adjustable node using the ILSA, which accounts for some geometric factors. Moreover, for each smooth node its optimal displacement is adjusted by solving a constrained optimization problem so as to avoid inverted elements. Finally, all those redistributed vertices are projected back to the original discrete surface. The complete procedure is outlined as Algo. 1, of which the algorithmic parameters will be explained later.

3 Classifying the Vertices The boundary nodes, if they exist, can be identified by examining the boundary edges that have only one adjacent element. For each interior node v i , let m =| F ( v i ) | , we first evaluate its discrete normal by solving the following linear equations

320

L. Chen et al.

A x = 1.   (1)

where A is an m × 3 matrix whose rows are the unit normals of F(vi), and 1 = (1, 1, ..., 1)^t is a vector of length m. Since A may be over- or under-determined, the solution is in the least-squares sense and we solve it by the singular value decomposition (SVD) method [9].

Algo. 1. The smoothing procedure.
Set the algorithmic parameters: max_global_iter_num, max_smooth_iter_num, relax1, relax2, μ, wl, wa;
Classify the vertices of the mesh into 4 types;
Initialize the smoothed mesh Tnew as the original mesh Tori;
for step := 1 to max_global_iter_num do   // global iteration
  Compute the normal of each vertex of Tnew;
  Compute the optimal displacement of each ridge node of Tnew;
  Compute the initial displacement of each smooth node of Tnew;
  Adjust the displacement of each smooth node in order to avoid inverted elements;
  Project the redistributed position of each adjustable node back to the original mesh Tori; denote this new mesh as T'new;
  Update Tnew as T'new
end for
Set the final optimized mesh as Tnew.
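As an illustration of the normal estimate in Eq. (1) (a sketch of ours using numpy, not the authors' code), the least-squares solution of A x = 1 can be obtained directly; numpy's lstsq routine is itself SVD-based:

import numpy as np

def vertex_normal(face_normals):
    # face_normals: (m, 3) array of unit normals of the faces incident on a vertex.
    A = np.asarray(face_normals, dtype=float)
    x, *_ = np.linalg.lstsq(A, np.ones(len(A)), rcond=None)   # least-squares solution of A x = 1
    return x   # a long vector suggests a ridge/corner vertex, a short one a smooth vertex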

The length of the resulting vertex normal has a geometric interpretation as an indicator of the singularity of vi. Let fj ∈ F(vi), 1 ≤ j ≤ |F(vi)|, N(fj) be the unit normal of fj and N(vi) = x the unknown vertex normal. The equation corresponding to fj in Eq. (1) is

N(fj) · x = |x| cos∠(N(fj), x) = 1.   (2)

Now it is obvious that, for some solution x of (1), the angles between x and N(fj), 1 ≤ j ≤ |F(vi)|, would be approximately equal. Roughly speaking, if the local mesh F(vi) is flat, these angles are small and the resulting vertex normal is short, so the vertex is regarded as a smooth node; otherwise the angles are large, the vertex normal is long, and the vertex is regarded as a ridge node. The ridge nodes are further examined to determine whether they are corner nodes. Let ei be an edge formed by two ridge nodes; if the bilateral angle between the two faces attached to ei is below a threshold angle (8π/9 in our algorithm), these two nodes are said to be attached-sharp nodes of each other. If the number of such nodes of a ridge node is not equal to two, this node is identified as a corner node. The geometric interpretation of the classification is self-evident.


4 Repositioning the Adjustable Vertices 4.1 Computing Displacements by the ILSA

In each global iteration of Algo. 1, the ILSA is employed to compute the initial optimal displacements for both ridge and smooth nodes. The procedure for treating these two types of nodes is similar. The major difference lies in that smooth nodes take all their adjacent nodes’ effect into account, while ridge nodes consider only the effect of their two attached-sharp nodes. Algo. 2 illustrates the details. Here if v i is a ridge node, then n = 2 and {v j , j = 1, 2} are two attached-sharp nodes of v i , otherwise n = | V ( v i ) | and v j ∈V ( v i ) . d( v j ) is the current displacement of v j . Such treatment of ridge nodes tries to prevent the crest lines on surface meshes from disappearing. The adjusting vector vec takes the lengths of adjacent edges into consideration in order to obtain a smoother result. 4.2 Adjusting Displacements by a Quadratic Penalty Approach

Unfortunately, Laplacian smoothing for 2D mesh may produce invalid elements. When used for surface meshes, there are still possibilities of forming abnormal elements. To compensate for this, we adjust the displacement iteratively for each smooth node by solving a constrained optimization problem. The idea originates in the minimal surface theory in differential geometry. Minimal surfaces are of zero mean curvature. Their physical interpretation is that surface tension tries to make the surface as “taut” as possible. That is, the surface should have the least surface area among all surfaces satisfying certain constraints like having fixed boundaries [10]. Every soap film is a physical model of a minimal surface. This motivates us, for the local mesh F ( v i ) at a smooth node v i , to move v i to minimize the overall area of the elements of F ( v i ) in order to make this local mesh “taut” and thus to avoid invalid elements as much as possible. This new position v 'i is also softly constrained to be on a plane by a quadratic penalty approach. Let d cur ( v i ) and N ( v i ) be the current displacement and the discrete normal of v i respectively. Initially d cur ( v i ) is the result from Algo. 2. Let x be the new pending position of v i and d new ( v i ) = x − v i the new adjusting displacement. Suppose v j ∈ V ( v i ) , 1 ≤ j ≤ n + 1, n =| V ( v i ) | are the vertices surrounding v i in circular

sequence and vn+1 = v1. s(v1, v2, v3) represents the area of the triangle Δv1v2v3. Now the optimization problem can be formulated as follows:

min_x g(x)  subject to  c(x) = 0   (3)

where

g(x) = wl (1/n) Σ_{j=1}^{n} |x − vj|² + wa Σ_{j=1}^{n} βij s²(x, vj, vj+1)   (4)


Algo. 2. ILSA for adjustable nodes.
Initialize the vertex displacement d(vi) of each adjustable node vi of Tnew as the Laplacian coordinate of vi: d(vi) := (1/n) Σ_{j=1}^{n} (vj − vi);
for k := 1 to max_smooth_iter_num do
  for each adjustable node vi do
    Compute a vector vec: vec := (1/S) Σ_{j=1}^{n} αij d(vj), where S = Σ_{j=1}^{n} αij and 1/αij = dist(vi, vj) is the distance between vi and vj;
    Update d(vi): d(vi) := (1 − relax1) · d(vi) + relax1 · vec;
  end for
  if …  // e.g. smooth enough
    break the iteration;
  end if
end for

and

c(x) = { N(vi) · dnew(vi)                       if dcur(vi) = 0
       { dcur(vi) · dnew(vi) − |dcur(vi)|²      if dcur(vi) ≠ 0.   (5)

Here wl and wa are two algorithmic parameters and βij = 1/s(vi, vj, vj+1). It can be observed that the constraint c(x) = 0 is used to penalize the deviation of x from a plane. When dcur(vi) = 0, it is the tangent plane at vi; otherwise it is the plane perpendicular to dcur(vi) and passing through the node vi + dcur(vi). In other words, it tries to constrain x to be on the current smoothed discrete surface. It is also observed from Eq. (4) that we include another term related to the length |x − vj|, and we use the square of the area instead of the area itself for simplicity. The area of a triangle can be calculated by a cross product, s(v1, v2, v3) = |(v2 − v1) × (v3 − v1)| / 2. The quadratic penalty function Q(x; μ) for problem (3) is

Q(x; μ) = g(x) + (1/μ) c²(x)   (6)


Algo. 3. Adjusting vertex displacement by a quadratic penalty approach.
for k := 1 to max_smooth_iter_num do
  for each smooth node vi do
    Compute the adjusting displacement dnew(vi) by solving problem (3);
    Update the displacement: dcur(vi) := relax2 · dnew(vi) + (1 − relax2) · dcur(vi);
    if …  // e.g. tiny change of vertex displacements
      break the iteration;
    end if
  end for
end for

where μ > 0 is the penalty parameter. Since Q(x; μ ) is a quadratic function, its minimization can be obtained by solving a linear system, for which we again use the SVD method. This procedure of adjusting vertex displacement is given in Algo. 3. 4.3 Projecting the New Position Back to the Original Mesh Once the final displacement of each adjustable node is available, the next step is to project the new position of the node back to the original discrete surface to form an optimized mesh. It is assumed that the displacement is so small that the new position of a node is near its original position. Thus, the projection can be confined to be on the two attached-ridge edges and the adjacent elements of the original node for ridge and smooth nodes, respectively.
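One standard way to carry out this projection (a generic sketch of ours, not the paper's routine; the names are hypothetical) is to take the closest point on each of the node's original incident triangles and keep the nearest one; the per-triangle test follows the classic closest-point-on-triangle construction:

import numpy as np

def closest_point_on_triangle(p, a, b, c):
    # Closest point to p on triangle (a, b, c); all arguments are 3-vectors.
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab.dot(ap), ac.dot(ap)
    if d1 <= 0 and d2 <= 0:
        return a                                        # vertex region a
    bp = p - b
    d3, d4 = ab.dot(bp), ac.dot(bp)
    if d3 >= 0 and d4 <= d3:
        return b                                        # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + ab * (d1 / (d1 - d3))                # edge region ab
    cp = p - c
    d5, d6 = ab.dot(cp), ac.dot(cp)
    if d6 >= 0 and d5 <= d6:
        return c                                        # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + ac * (d2 / (d2 - d6))                # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)))   # edge region bc
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)    # interior of the triangle

def project_back(p_new, incident_triangles):
    # incident_triangles: list of (a, b, c) vertex triples of the node's original faces.
    candidates = [closest_point_on_triangle(p_new, *t) for t in incident_triangles]
    return min(candidates, key=lambda q: np.linalg.norm(q - p_new))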

5 Experimental Results
Two examples are presented to show the capability of our method. The aspect ratio is used to measure the quality of the elements. The first example is a surface mesh defined on a single NURBS patch. The minimal and average aspect ratios of the original (resp. optimized) mesh are 0.09 and 0.81 (resp. 0.39 and 0.89). The second example, obtained from the Large Geometric Model Archives at Georgia Institute of Technology, is a famous scanned object named horse. The original mesh has 96966 triangles and 48485 nodes, and its average aspect ratio is 0.71, which increases to 0.82 for the optimized counterpart. Note the poor quality of the original mesh in several parts of the neck of the horse in Fig. 1(a), whose optimized result is given in Fig. 1(b). The details of the horse's ears in both the initial and optimized meshes also show that our surface smoothing procedure is capable of preserving sharp features.


Fig. 1. Details of the neck of the horse for the initial (a) and optimized (b) meshes

6 Conclusion and Future Work We have proposed an improved Laplacian smoothing approach for optimizing surface meshes. The nodes of an optimized mesh are kept on the initial mesh to avoid the shrinkage problem. A simple but effective procedure is also suggested to identify the feature points of a mesh in order to preserve its essential characteristics. Furthermore,


to avoid the formation of inverted elements, we adjust the initial displacements by solving a constrained optimization problem with a quadratic penalty method. In the future, this smoothing technique will be integrated into a surface remesher. A global and non-iterative Laplacian smoothing approach with feature preservation for surface meshes is also under investigation. Acknowledgements. The authors would like to acknowledge the financial support received from the NSFC (National Natural Science Foundation of China) for the National Science Fund for Distinguished Young Scholars under grant number 60225009, and the Major Research Plan under grant number 90405003. The first author is very grateful to Mr. Bangti Jin of The Chinese University of Hong Kong for his valuable revision of this paper.

References 1. Frey, P.J., Borouchaki, H.: Geometric surface mesh optimization. Computing and Visualization in Science, 1(3) (1998) 113-121 2. Brewer, M., Freitag, L.A., Patrick M.K., Leurent, T., Melander, D.: The mesquite mesh quality improvement toolkit. In: Proc. of the 12th International Meshing Roundtable, Sandia National Laboratories, Albuquerque, NM, (2003) 239-250 3. Freitag, L.A., Knupp, P.M: Tetrahedral mesh improvement via optimization of the element condition number. International Journal of Numerical Methods in Engineering, 53 (2002) 1377-1391 4. Freitag, L.A., Plassmann, P.: Local optimization-based simplicial mesh untangling and improvement. International Journal of Numerical Methods in Engineering, 49 (2000) 109-125 5. Knupp, P. M.: Achieving finite element mesh quality via optimization of the jacobian matrix norm and associated quantities. Part 1 – a framework for surface mesh optimization. International Journal of Numerical Methods in Engineering, 48 (2000) 401-420 6. Garimella, R.V., Shashkov, M.J., Knupp, P.M.: Triangular and quadrilateral surface mesh quality optimization using local parametrization. Computer Methods in Applied Mechanics and Engineering, 193(9-11) (2004) 913-928 7. Escobar, J.M., Montero, G., Montenegro, R., Rodriguez, E.: An algebraic method for smoothing surface triangulations on a local parametric space. International Journal of Numerical Methods in Engineering, 66 (2006) 740-760. 8. Yoshizawa, S., Belyaev, A., Seidel, H.–P.: Fast and robust detection of crest lines on meshes. In: Proc. of the ACM symposium on Solid and physical modeling, MIT (2005) 227-232 9. William H.P., Saul A.T., William T.V., Brain P.F.: Numerical Recipes in C++. 2nd edn. Cambridge University Press, (2002) 10. Oprea, J.: Differential Geometry and Its Applications. 2nd edn. China Machine Press, (2005)

Red-Black Half-Sweep Iterative Method Using Triangle Finite Element Approximation for 2D Poisson Equations J. Sulaiman1 , M. Othman2 , and M.K. Hasan3 School of Science and Technology, Universiti Malaysia Sabah, Locked Bag 2073, 88999 Kota Kinabalu, Sabah, Malaysia 2 Faculty of Computer Science and Info. Tech., Universiti Putra Malaysia, 43400 Serdang, Selangor D.E. 3 Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor D.E. [email protected] 1

Abstract. This paper investigates the application of the Red-Black Half-Sweep Gauss-Seidel (HSGS-RB) method by using the half-sweep triangle finite element approximation equation based on the Galerkin scheme to solve two-dimensional Poisson equations. Formulations of the full-sweep and half-sweep triangle finite element approaches in using this scheme are also derived. Some numerical experiments are conducted to show that the HSGS-RB method is superior to the Full-Sweep method. Keywords: Half-sweep Iteration, Red-Black Ordering, Galerkin Scheme, Triangle Element.

1 Introduction

By using the finite element method, many weighted residual schemes can be used by researchers to gain approximate solutions, such as the subdomain, collocation, least-square, moments and Galerkin schemes (Fletcher [4,5]). In this paper, by using the first order triangle finite element approximation equation based on the Galerkin scheme, we apply the Half-Sweep Gauss-Seidel (HSGS) method with the Red-Black ordering strategy for solving the two-dimensional Poisson equation. To show the efficiency of the HSGS-RB method, let us consider the two-dimensional Poisson equation defined as

∂²U/∂x² + ∂²U/∂y² = f(x, y),   (x, y) ∈ D = [a, b] × [a, b]   (1)

subject to the Dirichlet boundary conditions
U(x, a) = g1(x),  a ≤ x ≤ b
U(x, b) = g2(x),  a ≤ x ≤ b
U(a, y) = g3(y),  a ≤ y ≤ b
U(b, y) = g4(y),  a ≤ y ≤ b


Fig. 1. a) and b) show the distribution of uniform node points for the full- and halfsweep cases respectively at n = 7

To facilitate formulating the full-sweep and half-sweep linear finite element approximation equations for problem (1), we shall restrict our discussion to uniform node points only, as shown in Figure 1. Based on the figure, the solution domain D is discretized uniformly in both the x and y directions with a mesh size h, which is defined as

h = (b − a)/m,  m = n + 1   (2)

Based on Figure 1, we need to build the networks of triangle finite elements in order to derive the triangle finite element approximation equations for problem (1). By using the same concept of the half-sweep iteration applied to the finite difference method (Abdullah [1], Sulaiman et al. [13], Othman & Abdullah [8]), each triangle element involves three node points of type • only, as shown in Figure 2. Therefore, the implementation of the full-sweep and half-sweep iterative algorithms will be applied onto the node points of the same type until the iterative convergence test is met. Then other approximate solutions at the remaining points (points of the different type) are computed directly (Abdullah [1], Abdullah & Ali [2], Ibrahim & Abdullah [6], Sulaiman et al. [13,14], Yousif & Evans [17]).

2 Formulation of the Half-Sweep Finite Element Approximation

As mentioned in the previous section, we study the application of the HSGS-RB method by using the half-sweep linear finite element approximation equation based on the Galerkin scheme to solve two-dimensional Poisson equations. By considering three node points of type • only, the general approximation of the



Fig. 2. a) and b) show the networks of triangle elements for the full- and half-sweep cases respectively at n = 7

function, U(x, y), in the form of an interpolation function for an arbitrary triangle element, e, is given by (Fletcher [4], Lewis & Ward [7], Zienkiewicz [19])

Ũ[e](x, y) = N1(x, y)U1 + N2(x, y)U2 + N3(x, y)U3   (3)

and the shape functions, Nk(x, y), k = 1, 2, 3, can generally be stated as

Nk(x, y) = (1/det A)(ak + bk x + ck y),  k = 1, 2, 3   (4)

where, det A = x1 (y2 − y3 ) + x2 (y3 − y1 ) + x3 (y1 − y2 ), ⎡

⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ a1 x2 y3 − x3 y2 a1 a1 a1 a1 ⎣ a2 ⎦ = ⎣ x3 y1 − x1 y3 ⎦ , ⎣ a2 ⎦ = ⎣ a2 ⎦ , ⎣ a2 ⎦ = ⎣ a2 ⎦ , a3 x1 y2 − x2 y1 a3 a3 a3 a3 Beside this, the first order partial derivatives of the shape functions towards x and y are given respectively as bk  ∂ ∂x (Nk (x, y)) = det A k = 1, 2, 3 (5) c ∂ k ∂y (Nk (x, y)) = det A Again based on the distribution of the hat function, Rr,s (x, y) in the solution domain, the approximation of the functions, U (x, y) and f (x, y) in case of the full-sweep and half-sweep cases for the entire domain will be defined respectively as (Vichnevetsky [16])  (x, y) = U

m  m 

Rr,s (x, y)Ur,s

(6)

Rr,s (x, y)fr,s

(7)

r=0 s=0

f(x, y) =

m  m  r=0 s=0

and  y) = U(x,

m 

m 

r=0,2,4 s=0,2,4

Rr,s (x, y)Ur,s +

m−1 

m−1 

r=1,2,5 s=1,3,5

Rr,s (x, y)Ur,s

(8)

Red-Black Half-Sweep Iterative Method m 

f(x, y) =

m 

Rr,s (x, y)fr,s +

r=0,2,4 s=0,2,4

m−1 

m−1 

Rr,s (x, y)fr,s

329

(9)

r=1,3,5 s=1,3,5

Thus, Eqs. (6) and (8) are approximate solutions for problem (1). To construct the full-sweep and half-sweep linear finite element approximation equations for problem (1), this paper proposes the Galerkin finite element scheme. Thus, let consider the Galerkin residual method (Fletcher [4,5], Lewis & Ward [7]) be defined as Ri,j (x, y)E(x, y) dxdy = 0, i, j = 0, 1, 2, ..., m (10) D 2

2

where, E(x, y) = ∂∂xU2 + ∂∂yU2 − f (x, y) is a residual function. By applying the Green theorem, Eq. 10 can be shown in the following form

∂U ∂U −R (x, y) dx + R (x, y) dy i,j i,j ∂y ∂x λ  b b (11) ∂Ri,j (x, y) ∂U ∂Ri,j (x, y) ∂U + dxdy = Fi,j − ∂x ∂x ∂y ∂y a a where,



b



b

Fi,j =

Ri,j (x, y)f (x, y) dxdy a

a

By applying Eq. (5) and substituting the boundary conditions into problem (1), it can be shown that Eq. (11) will generate a linear system for both cases. Generally both linear systems can be stated as −



∗ Ki,j,r,s Ur,s =



∗ Ci,j,r,s fr,s

(12)

where, ∗ Ki,j,r,s



b



= a

a

b

∂Ri,j ∂Rr,s ∂x ∂x

∗ Ci,j,r,s



b



=





b



dxdy + a

a

b

∂Ri,j ∂Rr,s ∂y ∂y

 dxdy

b

(Ri,j (x, y)Rr,s (x, y)) dxdy a

a

Practically, the linear system in Eq. (12) for the full-sweep and half-sweep cases will be easily rewritten in the stencil form respectively as follows: 1. Full-sweep stencil ( Zienkiewicz [19], Twizell [15], Fletcher [5]) ⎡

⎤ ⎡ ⎤ 01 0 011 2 h ⎣ 1 −4 1 ⎦ Ui,j = ⎣ 1 6 1 ⎦ fi,j 12 01 0 110

(13)


2. Half-sweep stencil ⎡ ⎤ ⎡ ⎤ 1010 10 10 2 ⎣ 0 −4 0 0 ⎦ Ui,j = h ⎣ 0 5 0 1 ⎦ fi,j , i = 1 6 1010 10 10

(14)



⎡ ⎤ ⎤ 010 10 01010 2 h ⎣ 0 0 −4 0 0 ⎦ Ui,j = ⎣ 1 0 6 0 1 ⎦ fi,j , i = 1, n 6 010 10 01010 ⎡ ⎡ ⎤ ⎤ 010 1 0101 2 ⎣ 0 0 −4 0 ⎦ Ui,j = h ⎣ 1 0 5 0 ⎦ fi,j , i = n 6 010 1 0101

(15)

(16)

The stencil forms in Eqs. (13) till (16), which are based on the first order triangle finite element approximation equation, can be used to represent as the full-sweep and half-sweep computational molecules. Actually, the computational molecules involve seven node points in formulating their approximation equations. However, two of its coefficients are zero. Apart of this, the form of the computational molecules for both triangle finite element schemes is the same compared to the existing five points finite difference scheme, see Abdullah [1], Abdullah and Ali [2], Yousif and Evans [17].

3

Implementation of the HSGS-RB

According to previous studies on the implementation of various orderings, it is obvious that combinations of iterative schemes and ordering strategies have been proven to accelerate the convergence rate; see Parter [12], Evans and Yousif [3], Zhang [18]. In this section, however, two ordering strategies are considered, namely the lexicography (NA) and red-black (RB) orderings, applied to the HSGS iterative method; the resulting methods are called HSGS-NA and HSGS-RB respectively. For comparison, the Full-Sweep Gauss-Seidel (FSGS) method with NA ordering, namely FSGS-NA, acts as the control in the comparison of numerical results. Using the half-sweep triangle finite element approximation equations in Eqs. (14) till (16), Figure 3 shows, through the position of the numbers in the solution domain for n = 7, how both the HSGS-NA and HSGS-RB methods are performed, starting at number 1 and ending at the last number.
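To make the red-black ordering concrete, the sketch below applies a red-black Gauss-Seidel sweep to the ordinary five-point stencil of Eq. (13) on a uniform grid. This is a generic Python illustration of ours, not the paper's half-sweep finite element code: points with i + j even (red) are updated first, then the black points, so each colour class can also be updated independently.

import numpy as np

def redblack_gauss_seidel(u, f, h, max_iters=10000, tol=1e-10):
    # u: (n+2, n+2) grid holding Dirichlet boundary values; f: right-hand side on the same grid.
    for _ in range(max_iters):
        change = 0.0
        for colour in (0, 1):                                   # red sweep, then black sweep
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 != colour:
                        continue
                    new = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                  - h * h * f[i, j])
                    change = max(change, abs(new - u[i, j]))
                    u[i, j] = new
        if change < tol:                                        # convergence test
            break
    return u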

4

Numerical Experiments

To study the efficiency of the HSGS-RB scheme by using the half-sweep linear finite element approximation equation in Eqs. [14] till [16] based on the Galerkin scheme, three items will be considered in comparison such as the number of


Fig. 3. a) and b) show the NA and RB ordering strategies for the half-sweep case at n = 7

Table 1. Comparison of number of iterations, execution time (in seconds) and maximum errors for the iterative methods

Number of iterations (mesh size):
Methods    32    64     128     256
FSGS-NA  1986  7368   27164   99433
HSGS-NA  1031  3829   14159   52020
HSGS-RB  1027  3825   14152   52008

Execution time (seconds):
Methods    32    64     128     256
FSGS-NA  0.14  2.08   30.51   498.89
HSGS-NA  0.03  0.63    9.08   218.74
HSGS-RB  0.03  0.56    8.19   215.70

Maximum absolute errors:
Methods      32         64         128        256
FSGS-NA  1.4770e-4  3.6970e-5  9.3750e-6  2.8971e-6
HSGS-NA  5.7443e-4  1.6312e-4  4.4746e-5  1.1932e-5
HSGS-RB  5.7443e-4  1.6312e-4  4.4746e-5  1.1932e-5

iterations, execution time and maximum absolute error. Some numerical experiments were conducted in solving the following 2D Poisson equation (Abdullah [1])

∂²U/∂x² + ∂²U/∂y² = (x² + y²) exp(xy),   (x, y) ∈ D = [a, b] × [a, b]   (17)


Then the boundary conditions and the exact solution of problem (17) are defined by

U(x, y) = exp(xy),   (x, y) ∈ [a, b] × [a, b]   (18)

All results of the numerical experiments, obtained from implementations of the FSGS-NA, HSGS-NA and HSGS-RB methods, have been recorded in Table 1. In the implementation mentioned above, the convergence criterion used the tolerance error ε = 10⁻¹⁰.

5

Conclusion

In the previous section, it has shown that the full-sweep and half-sweep triangle finite element approximation equations based on the Galerkin scheme can be easily represented in Eqs. (13) till (16). Through numerical results collected in Table 1, the findings show that number of iterations have declined approximately 47.70 − 48.29% and 47.68 − 48.09% correspond to the HSGS-RB and HSGS-NA methods compared to FSGS-NA method. In fact, the execution time versus mesh size for both HSGS-RB and HSGS-NA methods are much faster approximately 56.76 − 78.57% and 56.15 − 78.57% respectively than the FSGS-NA method. Thus, we conclude that the HSGS-RB method is slightly better than the HSGSNA method. In comparison between the FSGS and HSGS methods, it is very obvious that the HSGS method for both ordering strategies is far better than the FSGS-NA method in terms of number of iterations and the execution time. This is because the computational complexity of the HSGS method is nearly 50% of the FSGS-NA method. Again, approximate solutions for the HSGS method are in good agreement compared to the FSGS-NA method. For our future works, we shall investigate on the use of the HSGS-RB as a smoother for the halfsweep multigrid (Othman & Abdullah [8,9]) and the development and implementation of the Modified Explicit Group (MEG) (Othman & Abdullah [10], Othman et al. [11])and the Quarter-Sweep Iterative Alternating Decomposition Explicit (QSIADE) (Sulaiman et al. [14]) methods by using finite element approximation equations.

References 1. Abdullah, A.R.: The Four Point Explicit Decoupled Group (EDG) Method: A Fast Poisson Solver, Intern. Journal of Computer Mathematics, 38(1991) 61-70. 2. Abdullah, A.R., Ali, N.H.M.: A comparative study of parallel strategies for the solution of elliptic pde’s, Parallel Algorithms and Applications, 10(1996) 93-103. 3. Evan, D.J., Yousif, W.F.: The Explicit Block Relaxation method as a grid smoother in the Multigrid V-cycle scheme, Intern. Journal of Computer Mathematics, 34(1990) 71-78. 4. Fletcher, C.A.J.: The Galerkin method: An introduction. In. Noye, J. (pnyt.). Numerical Simulation of Fluid Motion, North-Holland Publishing Company,Amsterdam (1978) 113-170.


5. Fletcher, C.A.J.: Computational Galerkin method. Springer Series in Computational Physics. Springer-Verlag, New York (1984). 6. Ibrahim, A., Abdullah, A.R.: Solving the two-dimensional diffusion equation by the four point explicit decoupled group (EDG) iterative method. Intern. Journal of Computer Mathematics, 58(1995) 253-256. 7. Lewis, P.E., Ward, J.P.: The Finite Element Method: Principles and Applications. Addison-Wesley Publishing Company, Wokingham (1991) 8. Othman, M., Abdullah, A.R.: The Halfsweeps Multigrid Method As A Fast Multigrid Poisson Solver. Intern. Journal of Computer Mathematics, 69(1998) 219-229. 9. Othman, M., Abdullah, A.R.: An Effcient Multigrid Poisson Solver. Intern. Journal of Computer Mathematics, 71(1999) 541-553. 10. Othman, M., Abdullah, A.R.: An Efficient Four Points Modified Explicit Group Poisson Solver, Intern. Journal of Computer Mathematics, 76(2000) 203-217. 11. Othman, M., Abdullah, A.R., Evans, D.J.: A Parallel Four Point Modified Explicit Group Iterative Algorithm on Shared Memory Multiprocessors, Parallel Algorithms and Applications, 19(1)(2004) 1-9 (On January 01, 2005 this publication was renamed International Journal of Parallel, Emergent and Distributed Systems). 12. Parter, S.V.: Estimates for Multigrid methods based on Red Black Gauss-Seidel smoothers, Numerical Mathematics, 52(1998) 701-723. 13. Sulaiman. J., Hasan, M.K., Othman, M.: The Half-Sweep Iterative Alternating Decomposition Explicit (HSIADE) method for diffusion equations. LNCS 3314, Springer-Verlag, Berlin (2004)57-63. 14. Sulaiman, J., Othman, M., Hasan, M.K.: Quarter-Sweep Iterative Alternating Decomposition Explicit algorithm applied to diffusion equations. Intern. Journal of Computer Mathematics, 81(2004) 1559-1565. 15. Twizell, E.H.: Computational methods for partial differential equations. Ellis Horwood Limited, Chichester (1984). 16. Vichnevetsky, R.: Computer Methods for Partial Differential Equations, Vol I. New Jersey: Prentice-Hall (1981) 17. Yousif, W.S., Evans, D.J.: Explicit De-coupled Group iterative methods and their implementations, Parallel Algorithms and Applications, 7(1995) 53-71. 18. Zhang, J.: Acceleration of Five Points Red Black Gauss-Seidel in Multigrid for Poisson Equations, Applied Mathematics and Computation, 80(1)(1996) 71-78. 19. Zienkiewicz, O.C.: Why finite elements?. In. Gallagher, R.H., Oden, J.T., Taylor, C., Zienkiewicz, O.C. (Eds). Finite Elements In Fluids-Volume, John Wiley & Sons,London 1(1975) 1-23

Optimizing Surface Triangulation Via Near Isometry with Reference Meshes Xiangmin Jiao, Narasimha R. Bayyana, and Hongyuan Zha College of Computing Georgia Institute of Technology, Atlanta, GA 30332, USA {jiao,raob,zha}@cc.gatech.edu

Abstract. Optimization of the mesh quality of surface triangulation is critical for advanced numerical simulations and is challenging under the constraints of error minimization and density control. We derive a new method for optimizing surface triangulation by minimizing its discrepancy from a virtual reference mesh. Our method is as easy to implement as Laplacian smoothing, and owing to its variational formulation it delivers results as competitive as the optimization-based methods. In addition, our method minimizes geometric errors when redistributing the vertices using a principle component analysis without requiring a CAD model or an explicit high-order reconstruction of the surface. Experimental results demonstrate the effectiveness of our method. Keywords: mesh optimization, numerical simulations, surface meshes, nearly isometric mapping.

1 Introduction

Improving surface mesh quality is important for many advanced 3-D numerical simulations. An example application is the moving boundary problems where the surfaces evolve over time and must be adapted for better numerical stability, accuracy, and efficiency while preserving the geometry. Frequently, the geometry of the evolving surface is unknown a priori but is part of the numerical solution, so the surface is only given by a triangulation without the availability of a CAD model. The quality of the mesh can be improved by mesh adaptation using edge flipping, edge collapsing, or edge splitting (see e.g. [1,2,3,4,5]), but it is often desirable to fix the connectivity and only redistribute the vertices, such as in the arbitrary Lagrangian-Eulerian methods [6]. In this paper, we focus on mesh optimization (a.k.a. mesh smoothing) with fixed connectivity. Mesh smoothing has a vast amount of literature (for example, see [7,8,9,10,11]). Laplacian smoothing is often used in practice for its simplicity, although it is not very effective for irregular meshes. The more sophisticated methods are often optimization based. An example is the angle-based method of Zhou and Shimada [11]. Another notable example is the method of Garimella et al. [7], which minimizes the condition numbers of the Jacobian of the triangles against some reference Jacobian matrices (RJM). More recently, the finite-element-based method is used in [8], but


their method is relatively difficult to implement. Note that some mesh smoothing methods (such as the angle-based method) are designed for two-dimensional meshes, and the conventional wisdom is to first parameterize the surface locally or globally and then optimize the flattened mesh [2,12,13,14]. To preserve the geometry, these methods typically require a smooth or discrete CAD model and associated point location procedures to project the points onto the surface, which increase the implementation complexity. The goal of this paper is to develop a mesh smoothing method that is as simple as Laplacian smoothing while being as effective as the sophisticated optimization-based methods without parameterizing the mesh. The novelty of our method is to formulate the problem as a near isometic mapping from an ideal reference mesh onto the surface and to derive a simple iterative procedure to solve the problem. Due to its variational nature, the method can balance angle optimization and density control. It also eliminates the needs of the preprocessing step of surface parameterization and the post-processing step of vertex projection, so it is much easier to implement and is well-suited for integration into numerical simulations. The remainder of the paper is organized as follows. In Section 2, we formulate the mesh optimization problem and explain its relationship to isometric mappings. In Section 3, we describe a simple discretization of our method for triangulated surfaces with adaptive step-size control. In Section 4, we present some experimental results. Section 5 concludes the paper with discussions of future research directions.

2 Mesh Optimization and Isometric Mappings

Given a mesh with a set of vertices and triangles, the problem of mesh optimization is to redistribute the vertices so that the shapes of the triangles are improved in terms of angles and sizes. Surface parameterization is the procedure to map the points on one surface onto a parameter domain (such as a plane or sphere) under certain constraints such as preservation of areas or angles [15]. Although surface parameterization has been widely used in computer graphics and visualization [16,17,18,19,20,21], in this section we explore an interesting connection between it and mesh optimization. More formally, given a 2-manifold surface M ⊂ R3 and a parameter domain Ω ⊂ R2 , the problem of surface parameterization is to find a mapping f : Ω → M such that f is one-to-one and onto. Typically, the surface M is a triangulated surface, with a set of vertices P i ∈ R3 and triangles T (j) = (P j1 , P j2 , P j3 ). The problem of isometric parameterization (or mapping) is to find the values of pi such that f (pi ) = P i , the triangles t(j) = (pj1 , pj2 , pj3 ) do not overlap in Ω, and the angles and the areas of the triangles are preserved as much as possible. We observe that the constraints of angle and area preservation in isometric parameterization are similar to angle optimization and density control in mesh optimization, except that the requirement of Ω ⊂ R2 is overly restrictive.


Fig. 1. Atomic mapping from triangle on M1 to triangle on M2

However, by replacing Ω with the input surface and replacing M by a "virtual" reference mesh with desired mesh quality and density (for example composed of equilateral triangles of user-specified sizes, where the triangles need not fit together geometrically and hence we refer to it as a "virtual" mesh), we can then effectively use isometric parameterization as a computational framework for mesh optimization, which we describe as follows. Let M1 denote an ideal virtual reference mesh and M2 the triangulated surface to be optimized. Let us first assume that M2 is a planar surface with a global parameterization ξ, and we will generalize the framework to curved surfaces in the next section. In this context, the problem can be considered as reparameterizing M2 based on the triangles and the metrics of M1. Consider an atomic linear mapping g ≡ f2 ∘ f1⁻¹ from a triangle on M1 onto a corresponding triangle on M2, as shown in Figure 1, where f1 and f2 are both mappings from a reference triangle. Let J1 and J2 denote their Jacobian matrices with respect to the reference triangle. The Jacobian matrix of the mapping g is A = J2 J1⁻¹. For g to be nearly isometric, A must be close to orthogonal. We measure the deviation of A from orthogonality using two energies: angle distortion and area distortion. Angle distortion depends on the condition number of the matrix A in 2-norm, i.e., κ2(A) = ‖A‖2 ‖A⁻¹‖2. Area distortion can be measured by det(A) = det(J2)/det(J1), which is equal to 1 if the mapping is area preserving. Let us define a function

τp(s) ≡ { s^p + s^(−p)   if s > 0
        { ∞              otherwise,   (1)

where p > 0, so τp is minimized if s = 1 and approaches ∞ if s ≤ 0 or s → ∞. To combine the angle- and area-distortion measures, we use the energy

EI(T, μA) ≡ (1 − μA) τ1(κ2(A)) + μA τ_{1/2}(det(A)),   (2)

where μA is between 0 and 1 and indicates the relative importance of area preservations versus angle preservation. For feasible parameterizations, EI is finite because det(A) > 0 and κ2 (A) ≥ 1. To obtain a nearly isometric mapping,


we must find ξi = g(xi) for each vertex xi ∈ M1 to minimize the sum of EI over all triangles on M1, i.e.,

EI(M1, μA) ≡ Σ_{T ∈ M1} EI(T, μA).   (3)

We refer to this minimization as nearly isometric parameterization of surfaces (NIPS), which balances the preservation of angles and areas. This formulation shares similarity with that of Degener et al. for surface parameterization [12], who also considered both area and angle distortions but used a different energy. Note that the special case with μA = 0 would reduce to the most isometric parameterization (MIPS) of Hormann and Griener [18] and is closely related to the condition-number-based optimization in [7]. The direct minimization of EI may seem difficult because of the presence of κ2(A). However, by using a result from [22] regarding the computation of the Dirichlet energy ED(g) = trace(A^T A) det(J1)/4 as well as the fact that

τ1(κ2(A)) = κF(A) = trace(A^T A)/det(A) = 4 ED(g)/det(J2),   (4)

one can show that

τ1(κ2(A)) = (1/det(J2)) Σ_i cot(αi) ‖li‖²₂,   (5)

where αi and li ≡ ξ_{i−} − ξ_{i+} are defined as in Figure 1, and i− and i+ denote the predecessor and successor of i in the triangle. To minimize EI, we must evaluate its gradient with respect to ξi. For a triangle T, let li⊥ ≡ n̂ × li denote the 90° counter-clockwise rotation of the vector li on M2. It can be verified that

∂EI(M, μA)/∂ξi = Σ_{i ∈ T(j)} ( s^(j)_{i+} l^(j)_{i+} − s^(j)_{i−} l^(j)_{i−} + t^(j)_i l^(j)⊥_i ),   (6)

where

s_± = (1 − μA) 2 cot(α_±) / det(J2)  and  t_i = (1 − μA) κF / det(J2) + μA (det(J1) − det(J2)) / (2 det(J1)^{1/2} det(J2)^{3/2}).   (7)

Equation (6) is a simple weighted sum of its incident edges and 90◦ counterclockwise rotation of opposite edges over all its incident triangles T (j) .

3 Mesh Optimization for Curved Surfaces

From Eqs. (6) and (7), it is obvious that ∂EI /∂ξi does not depend on the underlying parameterization ξ i of M2 , so it becomes straightforward to evaluate it directly on a curved surface. In particular, at each vertex i on M1 , let V i ≡ (t1 |t2 )3×2 denote the matrix composed of the unit tangents of M2 at the point. To reduce error, we constrain the point to move within the tangent space of


M2, so that the displacements would be Vi ui, where ui ∈ R². Because of the linearity, from the chain rule we then have

∂EI(M1, μA)/∂ui = Vi^T Σ_{i ∈ T(j)} ( s^(j)_{i+} l^(j)_{i+} − s^(j)_{i−} l^(j)_{i−} + t^(j)_i l^(j)⊥_i ),   (8)

where li ∈ R³ as defined earlier. This equation constrains the search direction within the local tangent space at vertex i without having to project its neighborhood onto a plane. We estimate the tangent space as in [23]. In particular, at each vertex v, suppose v is the origin of a local coordinate frame, and m is the number of faces incident on v. Let N be a 3 × m matrix whose ith column vector is the unit outward normal to the ith incident face of v, and W be an m × m diagonal matrix with Wii equal to the face area associated with the ith face. Let A denote N W N^T, which we refer to as the normal covariance matrix. A is symmetric positive semi-definite with real eigenvalues. We use the vector space spanned by the eigenvectors corresponding to the two smaller eigenvalues of A as the tangent space. If the surface contains ridges or corners, we restrict the tangent space to contain only the eigenvector corresponding to the smallest eigenvalue of A at ridge vertices and make the tangent space empty at corners. To solve the variational problem, one could use a Gauss-Seidel style iteration to move the vertices. This approach was taken in some parameterization and mesh optimization algorithms [3,7]. For simplicity, we use a simple nonlinear Jacobi iteration, which moves vertex i by a displacement of

di ≡ −Vi Vi^T [ Σ_{i ∈ T(j)} ( s^(j)_{i+} l^(j)_{i+} − s^(j)_{i−} l^(j)_{i−} + t^(j)_i l^(j)⊥_i ) ] / [ Σ_{i ∈ T(j)} ( s^(j)_{i+} + s^(j)_{i−} ) ].   (9)

This Jacobi-style iteration may converge slower but it can be more efficient than the Gauss-Seidel style iteration, as it eliminates the need of reestimating the tangent spaces at the neighbor vertices of v after moving a vertex v. The concurrent motion of the vertices in the Jacobi-style iterations may lead to mesh folding. To address this problem, we introduce an asynchronous step-size control. For each triangle pi pj pk, we solve for the maximum α ≤ 1 such that the triangle p_i^(α) p_j^(α) p_k^(α) does not fold, where p_i^(α) ≡ pi + α di [23]. We reduce di at vertex i by a factor equal to the minimum of the αs of its incident faces. After rescaling the displacement of all the vertices, we recompute α and repeat the rescaling process until α = 1 for all vertices.
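A small numpy sketch of the tangent-space estimate described above (our illustration; the names are hypothetical): build the normal covariance matrix A = N W N^T and keep the eigenvectors of its two smallest eigenvalues.

import numpy as np

def tangent_space(face_normals, face_areas):
    # face_normals: (m, 3) unit outward normals of the faces incident on a vertex.
    # face_areas: length-m face-area weights.
    N = np.asarray(face_normals, dtype=float).T        # 3 x m
    W = np.diag(np.asarray(face_areas, dtype=float))   # m x m
    A = N @ W @ N.T                                    # normal covariance matrix, 3 x 3
    eigvals, eigvecs = np.linalg.eigh(A)               # eigenvalues in ascending order
    return eigvecs[:, :2]                              # V_i = (t1 | t2): basis of the tangent plane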

4 Experimental Results

In this section, we present some preliminary results using our method for static 2-D meshes and dynamic 3-D meshes.


Table 1. Comparative results of optimizing 2-D meshes. CN-based is our method with μA = 0 and CNEA-based with μA = 1. Symbol '-' indicates mesh folding.

Minimum angle:
Method        U1    U2    U3    R1     R2    R3
Original     12.0   8.8   8.5   0.11   1.1   0.043
Laplacian    33.1  31.4  30.8   5.8    4.9   4.5
Angle-based  32.9  30.9  29.5   3.9    -     -
CN-based     36.0  36.1  34.6  12.6    8.9  10.3
CNEA-based   35.7  35.3  34.2  12.7    7.8   8.9

Maximum angle:
Method        U1     U2     U3     R1     R2     R3
Original     129.5  156.9  147.5  179.7  176.9  179.9
Laplacian     99.4  105.9  109.2  168.3  170.8  171.0
Angle-based   96.0  103.7  105.9  170.3   -      -
CN-based      96.8  100.1  105.6  153.2  157.3  156.2
CNEA-based    97.0  101.3  105.8  153.9  163.4  160.6

Fig. 2. Sample meshes (a,c) before and (b,d) after mesh optimization

4.1 Optimization of 2-D Meshes

We first compare the effectiveness of our method against the length-weighted Laplacian smoothing and the angle-based method of Zhou and Shimada [11]. Because these existing methods are better established for planar meshes, we perform our comparisons in 2-D. Table 1 shows the minimum and maximum angles of six different meshes before and after mesh optimization, including three relatively uniform meshes (denoted by U1, U2, and U3) and three meshes with random points (denoted by R1, R2, and R3). In our methods, we consider the virtual reference mesh to be composed of equilateral triangles with the average area of the triangles. Figure 2 shows the original and the optimized meshes using our method with μA = 1 for U1 and R1. In nearly all cases, the condition-number based method (i.e., μA = 0) performs substantially better in minimum angles for all cases, comparative or slightly better in maximum angles for uniform meshes, and significantly better in maximum angles for the random meshes. The areaequalizing optimization delivers slightly worse angles than condition-number based optimization, but the former still outperforms edge-weighted Laplacian smoothing and the angle-based methods while allowing better control of areas. 4.2

4.2 Optimization of Dynamic Meshes

In this test, we optimize a mesh that is deformed by a velocity field. Our test mesh discretizes a sphere with radius 0.15 centered at (0.5, 0.75, 0.5) and contains 5832 vertices and 11660 triangles. The velocity field is given by


Fig. 3. Example of optimized dynamic surfaces. Colors indicate triangle areas.

$$\begin{aligned} u(x,y,z) &= \cos(\pi t/T)\,\sin^2(\pi x)\,\bigl(\sin(2\pi z) - \sin(2\pi y)\bigr),\\ v(x,y,z) &= \cos(\pi t/T)\,\sin^2(\pi y)\,\bigl(\sin(2\pi x) - \sin(2\pi z)\bigr),\\ w(x,y,z) &= \cos(\pi t/T)\,\sin^2(\pi z)\,\bigl(\sin(2\pi y) - \sin(2\pi x)\bigr), \end{aligned} \qquad (10)$$

where T = 3, so that the shape is deformed the most at time t = 1.5 and should return to the original shape at time t = 3. We integrate the motion of the interface using the face-offsetting method in [23] while redistributing the vertices using the initial mesh as the reference mesh. In this test the angles and areas of the triangles were well preserved even after very large deformation. Furthermore, the vertex redistribution introduced very small errors and the surface was able to return to a nearly perfect sphere at time t = 3.
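For reference, the deformation field can be evaluated in a few lines. The sketch below simply transcribes the velocity field of Eq. (10) and advects a single point with forward Euler; the actual simulation integrates the whole interface with the face-offsetting method of [23], so this is only an illustration.

```python
import numpy as np

T = 3.0  # period: maximal deformation at t = T/2, shape recovered at t = T

def velocity(x, y, z, t):
    """Deformation velocity field of Eq. (10)."""
    s = np.cos(np.pi * t / T)
    u = s * np.sin(np.pi * x) ** 2 * (np.sin(2 * np.pi * z) - np.sin(2 * np.pi * y))
    v = s * np.sin(np.pi * y) ** 2 * (np.sin(2 * np.pi * x) - np.sin(2 * np.pi * z))
    w = s * np.sin(np.pi * z) ** 2 * (np.sin(2 * np.pi * y) - np.sin(2 * np.pi * x))
    return np.array([u, v, w])

# Forward-Euler advection of one point on the initial sphere (illustration only)
p = np.array([0.5, 0.75 + 0.15, 0.5])
dt = 1e-3
for step in range(int(T / dt)):
    p = p + dt * velocity(*p, step * dt)
print(p)  # returns near its initial position at t = T, up to integration error
```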

5 Conclusion

In this paper, we proposed a new method for optimizing surface meshes using a near isometry with reference meshes. We derived a simple discretization, which is easy to implement and is well suited for integration into large-scale numerical simulations. We compared our method with some existing methods, showed substantial improvements in the maximum and minimum angles, and demonstrated its effective use for moving meshes. As a future direction, we plan to extend our method to optimize quadrilateral meshes and 3-D volume meshes. Acknowledgments. This work was supported by a subcontract from the Center for Simulation of Advanced Rockets of the University of Illinois at Urbana-Champaign funded by the U.S. Department of Energy through the University of California under subcontract B523819 and in part by the National Science Foundation under award number CCF-0430349.

References 1. Alliez, P., Meyer, M., Desbrun, M.: Interactive geometry remeshing. ACM Trans. Graph. 21 (2002) 347–354 Proc. SIGGRAPH 2002. 2. Frey, P., Borouchaki, H.: Geometric surface mesh optimization. Comput. Visual. Sci. (1998) 113–121


3. Hormann, K., Labsik, U., Greiner, G.: Remeshing triangulated surfaces with optimal parametrizations. Comput. Aid. D’s. 33 (2001) 779–788 4. Praun, E., Hoppe, H.: Spherical parametrization and remeshing. ACM Trans. Graph. 22 (2003) 340–349 Proc. SIGGRAPH 2003. 5. Surazhsky, V., Gotsman, C.: Explicit surface remeshing. In: Eurographics Symposium on Geometric Processing. (2003) 20–30 6. Donea, J., Huerta, A., Ponthot, J.P., Rodriguez-Ferran, A.: Arbitrary LagrangianEulerian methods. In Stein, E., de Borst, R., Hughes, T.J., eds.: Encyclopedia of Computational Mechanics. Volume 1: Fundamentals. John Wiley (2004) 7. Garimella, R., Shashkov, M., Knupp, P.: Triangular and quadrilateral surface mesh quality optimization using local parametrization. Comput. Meth. Appl. Mech. Engrg. 193 (2004) 913–928 8. Hansen, G.A., Douglass, R.W., Zardecki, A.: Mesh Enhancement. Imperial College Press (2005) 9. Knupp, P., Margolin, L., Shashkov, M.: Reference Jacobian optimization based rezone strategies for arbitrary Lagrangian Eulerian methods. J. Comput. Phys. 176 (2002) 93–128 10. Shashkov, M.J., Knupp, P.M.: Optimization-based reference-matrix rezone strategies for arbitrary lagrangian-Eulerian methods on unstructured grids. In: Proc. the 10th International Meshing Roundtable. (2001) 167–176 11. Zhou, T., Shimada, K.: An angle-based approach to two-dimensional mesh smoothing. In: Proceedings of 9th International Meshing Roundtable. (2000) 373–384 12. Degener, P., Meseth, J., Klein, R.: An adaptable surface parameterization method. In: Proc. the 12th International Meshing Roundtable. (2003) 227–237 13. Knupp, P.: Achieving finite element mesh quality via optimization of the jacobian matrix norm and associated quantities. part i: a framework for surface mesh optimization. Int. J. Numer. Meth. Engrg. 48 (2000) 401–420 14. Sheffer, A., de Sturler, E.: Parameterization of faceted surfaces for meshing using angle-based flattening. Engrg. Comput. 17 (2001) 326–337 15. Floater, M.S., Hormann, K.: Surface parameterization: a tutorial and survey. In: Advances in Multiresolution for Geometric Modelling. Springer Verlag (2005) 157–186 16. Desbrun, M., Meyer, M., Alliez, P.: Intrinsic parameterizations of surface meshes. In: Proc. Eurographics 2002 Conference. (2002) 17. Gotsman, C., Gu, X., Sheffer, A.: Fundamentals of spherical parameterization for 3d meshes. ACM Trans. Graph. 22 (2003) 358–363 Proc. SIGGRAPH 2003. 18. Hormann, K., Greiner, G.: MIPS: An efficient global parametrization method. In Laurent, P.J., Sablonniere, P., Schumaker, L.L., eds.: Curve and Surface Design: Saint-Malo 1999. (2000) 153–162 19. Khodakovsky, A., Litke, N., Schröder, P.: Globally smooth parameterizations with low distortion. ACM Trans. Graph. 22 (2003) 350–357 Proc. SIGGRAPH 2003. 20. Kos, G., Varady, T.: Parameterizing complex triangular meshes. In Lyche, T., Mazure, M.L., Schumaker, L.L., eds.: Curve and Surface Design: Saint-Malo 2002, Modern Methods in Applied Mathematics. (2003) 265–274 21. Sheffer, A., Gotsman, C., Dyn, N.: Robust spherical parametrization of triangular meshes. In: Proc. the 4th Israel-Korea Bi-National Conference on Geometric Modeling and Computer Graphics. (2003) 94–99 22. Pinkall, U., Polthier, K.: Computing discrete minimal surfaces and their conjugates. Exper. Math. 2 (1993) 15–36 23. Jiao, X.: Face offsetting: a unified framework for explicit moving interfaces. J. Comput. Phys. 220 (2007) 612–625

Efficient Adaptive Strategy for Solving Inverse Problems

M. Paszyński¹, B. Barabasz², and R. Schaefer¹

¹ Department of Computer Science
² Department of Modeling and Information Technology
AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Cracow, Poland
{paszynsk,schaefer}@agh.edu.pl, [email protected]
http://home.agh.edu.pl/~paszynsk

Abstract. The paper describes the strategy for efficient solving of difficult inverse problems, utilizing Finite Element Method (FEM) as a direct problem solver. The strategy consists of finding an optimal balance between the accuracy of global optimization method and the accuracy of an hp-adaptive FEM used for the multiple solving of the direct problem. The crucial relation among errors was found for the objective function being the energy of the system defining the direct problem. The strategy was applied for searching the thermal expansion coefficient (CTE) parameter in the Step-and-flash Imprint Lithography (SFIL) process. Keywords: Inverse problems, Finite Element Method, hp adaptivity, Molecular Statics.

1 Introduction

Inverse parametric problems belong to the group of the most demanding computational tasks. Their solution requires a sequence of direct problem solutions, e.g. obtained by the Finite Element Method (FEM); thus the accuracy of the inverse problem solution is limited by the accuracy of the direct problem solution. We utilize the fully automatic hp FEM codes [6,3], which generate a sequence of computational meshes delivering exponential convergence of the numerical error with respect to the mesh size, for solving the direct problems. Using the maximum accuracy for the direct problem solution in each iteration of the inverse solver leads to needless computational cost (see e.g. [4]). A better strategy is to balance dynamically the accuracy of both iterations. However, to be able to execute such a strategy we need to relate the error of the optimization method, defined as the inaccuracy of the objective function value, to the FEM solution error. We propose such a relation and a detailed error balance strategy for the objective function being the energy of the system defining the direct problem.


The strategy is tested on Step-and-Flash Imprint Lithography (SFIL) simulations. The objective of the inverse analysis is to find the value of the thermal expansion coefficient producing a shrinkage of the feature that compares well with experimental data. The energy used for the error estimation of the objective function was obtained from the experimental data and static molecular model calculations [5].

2 The Automatic hp Adaptive Finite Element Method

Sequential and parallel 3D hp adaptive FEM codes [6], [3] generate in fully automatic mode a sequence of hp FE meshes providing exponential convergence of the numerical error with respect to the size of the mesh (number of degrees of freedom, CPU time). Given an initial mesh, called the coarse mesh, presented

Fig. 1. The coarse mesh with p = 2 and fine mesh with p = 3 on all elements edges, faces, and interiors. The optimal meshes after the first, second and third iterations. Various colors denote various polynomial orders of approximation.

on the first picture in Fig. 1, with polynomial orders of approximation p = 2 on element edges, faces and interiors, we first perform a global hp refinement to produce the fine mesh presented on the second picture in Fig. 1, by breaking each element into 8 son elements and increasing the polynomial order of approximation by one. The direct problem is solved on the coarse and on the fine mesh. The energy norm (see e.g. [1]) difference between the coarse and fine mesh solutions is then utilized to estimate relative errors over coarse mesh elements. The optimal refinements are then selected and performed for coarse mesh elements with high relative errors. The coarse mesh elements can be broken into smaller son elements (this procedure is called h refinement), or the polynomial order of approximation can be increased on element edges, faces or interiors (this procedure is called p refinement), or both (this is called hp refinement). For each finite element from the coarse mesh we consider locally several possible h, p or hp refinements, and the refinement strategy providing the maximum error decrease rate is selected. The error decrease rate

$$rate = \frac{\|u_{h/2,p+1} - u_{hp}\| - \|u_{h/2,p+1} - w_{hp}\|}{nrdof_{added}} \qquad (1)$$

is defined as the relative error estimation in energy norm divided by the number of degrees of freedom added. Here $u_{hp}$ stands for the coarse mesh solution, $u_{h/2,p+1}$


for the fine mesh solution, and $w_{hp}$ is the solution corresponding to the proposed refinement strategy, obtained by the projection based interpolation technique [2]. The optimal mesh generated in such a way becomes the coarse mesh for the next iteration, and the entire procedure is repeated as long as the global relative error estimation is larger than the required accuracy of the solution (see [6] for more details). The sequence of optimal meshes generated by the automatic hp-adaptive code from the coarse mesh is presented in the third, fourth and fifth pictures in Figure 1. The relative error of the solution goes down from 15% to 5%.
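The selection criterion of Eq. (1) amounts to a one-line comparison per candidate refinement. The following sketch (illustrative pseudocode made executable in Python, with hypothetical numbers; not the actual hp FEM implementation) shows the idea.

```python
def best_refinement(candidates):
    """Select the refinement with the largest error decrease rate, Eq. (1).

    candidates: list of dicts with keys
      'err_coarse'  = ||u_{h/2,p+1} - u_{hp}||   (energy norm, fixed per element)
      'err_proj'    = ||u_{h/2,p+1} - w_{hp}||   (w_{hp}: projection onto the candidate space)
      'nrdof_added' = degrees of freedom the candidate adds
    """
    def rate(c):
        return (c['err_coarse'] - c['err_proj']) / c['nrdof_added']
    return max(candidates, key=rate)

# Hypothetical candidates for one coarse element: h-, p- and hp-refinement
candidates = [
    {'name': 'h',  'err_coarse': 0.20, 'err_proj': 0.09, 'nrdof_added': 24},
    {'name': 'p',  'err_coarse': 0.20, 'err_proj': 0.12, 'nrdof_added': 6},
    {'name': 'hp', 'err_coarse': 0.20, 'err_proj': 0.05, 'nrdof_added': 30},
]
print(best_refinement(candidates)['name'])   # here 'p' gives the best rate
```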

3 The Relation Between the Objective Function Error and the Finite Element Method Error

We assume the direct problem is modeled by the abstract variational equation

$$\begin{cases} u \in u_0 + V \\ b(u,v) = l(v) \quad \forall v \in V \end{cases} \qquad (2)$$

where $u_0$ is the lift of the Dirichlet boundary conditions [2]. The functionals $b$ and $l$ depend on the inverse problem parameters $d$. The variational problem (2) is equivalent to the minimization problem (3) if $b$ is symmetric and positive definite (see e.g. [2]):

$$\begin{cases} u \in u_0 + V \\ E(u) = \frac{1}{2} b(u,u) - l(u) \longrightarrow \min \end{cases} \qquad (3)$$

where $E(u) = \frac{1}{2} b(u,u) - l(u)$ is the functional of the total energy of the solution. Problem (2) may be approximated using the FEM on the finite dimensional subspace $V_{h,p} \subset V$:

$$\begin{cases} u_{h,p} \in u_0 + V_{h,p} \\ b(u_{h,p}, v_{h,p}) = l(v_{h,p}) \quad \forall v_{h,p} \in V_{h,p} \end{cases} \qquad (4)$$

For a sequence of meshes generated by the self-adaptive hp FEM code, every coarse mesh space is a subset of the corresponding fine mesh space, $V_{h,p} \subset V_{h/2,p+1} \subset V$. The absolute relative FEM error utilized by the self-adaptive hp FEM code is defined as the energy norm difference between the coarse and fine mesh solutions

$$err_{FEM} = \|u_{h,p} - u_{h/2,p+1}\|_E \,. \qquad (5)$$

The inverse problem can be formulated as

$$\text{Find } \hat{d} : \; |J_{h,p}(\hat{d}) - J(d^*)| = \lim_{h \to 0,\, p \to \infty} \; \min_{d^k \in \Omega} |J_{h,p}(d^k) - J(d^*)| \qquad (6)$$

where $d^*$ denotes the exact parameters of the inverse problem (the exact solution of the variational formulation for these parameters compares well with the experimental data), $d^k$ denotes the approximated parameters of the inverse problem, $\Omega$ is the set of all admissible parameters $d^k$, and $J(d^*) = E(u(d^*))$ is the energy of the


exact solution $u(d^*)$ of the variational problem (2) for exact parameters $d^*$, and $J_{h,p}(d^k) = E(u_{h,p}(d^k))$ is the energy of the solution $u_{h,p}(d^k)$ of the approximated problem (4) for approximated parameters $d^k$. The objective function error is defined as the energy difference between the solution of the approximated problem (4) for the approximated parameter $d^k$ and the exact solution of problem (2) for the exact parameter $d^*$ (assumed to be equal to the energy of the experiment):

$$e_{h,p}(d^k) = |J_{h,p}(d^k) - J(d^*)| \,. \qquad (7)$$

In other words, the approximated parameter $d^k$ is placed into the approximated formulation (4), the solution $u_{h,p}(d^k)$ (which depends on $d^k$) is computed by FEM, and the energy of the solution $E(u_{h,p}(d^k))$ is computed.

Lemma 1. $2\,\bigl(J_{h,p}(d^k) - J_{h/2,p+1}(d^k)\bigr) = \|u_{h,p}(d^k) - u_{h/2,p+1}(d^k)\|_E^2$

Proof:
$2\bigl(J_{h,p}(d^k) - J_{h/2,p+1}(d^k)\bigr) = 2\bigl(E(u_{h,p}(d^k)) - E(u_{h/2,p+1}(d^k))\bigr)$
$= b(u_{h,p}(d^k), u_{h,p}(d^k)) - 2\,l(u_{h,p}(d^k)) - b(u_{h/2,p+1}(d^k), u_{h/2,p+1}(d^k)) + 2\,l(u_{h/2,p+1}(d^k))$
$= b(u_{h,p}(d^k), u_{h,p}(d^k)) - b(u_{h/2,p+1}(d^k), u_{h/2,p+1}(d^k)) + 2\,l\bigl(u_{h/2,p+1}(d^k) - u_{h,p}(d^k)\bigr)$
$= b(u_{h,p}(d^k), u_{h,p}(d^k)) - b(u_{h/2,p+1}(d^k), u_{h/2,p+1}(d^k)) + 2\,b\bigl(u_{h/2,p+1}(d^k), u_{h/2,p+1}(d^k) - u_{h,p}(d^k)\bigr)$
$= b\bigl(u_{h/2,p+1}(d^k) - u_{h,p}(d^k),\, u_{h/2,p+1}(d^k) - u_{h,p}(d^k)\bigr) = \|u_{h,p}(d^k) - u_{h/2,p+1}(d^k)\|_E^2$,
where $V_{h,p} \subset V_{h/2,p+1} \subset V$ stand for the coarse and fine mesh subspaces.

Lemma 2. $e_{h/2,p+1}(d^k) \le \frac{1}{2}\|u_{h/2,p+1}(d^k) - u_{h,p}(d^k)\|_E^2 + |J_{h,p}(d^k) - J(d^*)|$, where $e_{h/2,p+1}(d^k) := |J_{h/2,p+1}(d^k) - J(d^*)|$.

Proof: $e_{h/2,p+1}(d^k) = |J_{h/2,p+1}(d^k) - J(d^*)| = |J_{h/2,p+1}(d^k) - J_{h,p}(d^k) + J_{h,p}(d^k) - J(d^*)| \le |J_{h/2,p+1}(d^k) - J_{h,p}(d^k)| + |J_{h,p}(d^k) - J(d^*)| = \frac{1}{2}\|u_{h/2,p+1}(d^k) - u_{h,p}(d^k)\|_E^2 + |J_{h,p}(d^k) - J(d^*)|$.

The objective function error over the fine mesh is thus bounded by the relative error of the coarse mesh with respect to the fine mesh, plus the objective function error over the coarse mesh.

4 Algorithm

Lemma 2 motivates the following algorithm relating the inverse error with the objective function error.

  start with random initial values of the inverse problem parameters
  solve the problem on the coarse and fine FEM meshes
  compute the FEM error
  inverse analysis loop:
      propose new values for the inverse problem parameters
      solve the problem on the coarse mesh
      compute the objective function error
      if (objective function error < const * FEM error)
          execute one step of the hp adaptivity
          solve the problem on the new coarse and fine FEM meshes
          compute the FEM error
      if (inverse error < required accuracy) stop
  end

The inverse error estimation proven in Lemma 2 allows us to perform the hp adaptation at the right moment. If the objective function error is much smaller than the FEM error, the minimization of the objective function error does not make sense on the current FE mesh, and the mesh quality improvement is needed.
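A compact sketch of this loop is given below. All callables (propose, solve_coarse, solve_fine, refine_mesh and the two error estimators) are hypothetical placeholders for the corresponding components of the hp FEM package and the optimizer; this is an illustration of the control flow, not the authors' code.

```python
def adaptive_inverse_solve(propose, solve_coarse, solve_fine, refine_mesh,
                           objective_error, fem_error, required_accuracy, c=1.0):
    """Error-balancing inverse loop (sketch with hypothetical interfaces)."""
    d = propose(None)                        # random initial parameters
    u_c, u_f = solve_coarse(d), solve_fine(d)
    e_fem = fem_error(u_c, u_f)              # coarse/fine energy-norm difference
    while True:
        d = propose(d)                       # new inverse-parameter proposal
        u_c = solve_coarse(d)
        e_obj = objective_error(u_c, d)      # energy difference to the experiment
        if e_obj < c * e_fem:                # objective error dominated by FEM error:
            refine_mesh()                    # improve the mesh before optimizing further
            u_c, u_f = solve_coarse(d), solve_fine(d)
            e_fem = fem_error(u_c, u_f)
        if e_obj < required_accuracy:        # inverse problem solved to the requested accuracy
            return d
```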

5 Step-and-Flash Imprint Lithography

The above algorithm will be tested on the SFIL process simulation. SFIL is a modern patterning process utilizing photopolymerization to replicate the topography of a template into a substrate. It can be summarized in the following steps, compare Fig. 2: Dispense - the SFIL process employs a template / substrate alignment scheme to bring a rigid template and substrate into parallelism, trapping the etch barrier in the relief structure of the template; Imprint - the gap is closed until the force that ensures a thin base layer is reached; Exposure - the template is then illuminated through the backside to cure the etch barrier; Separate - the template is withdrawn, leaving low-aspect ratio, high resolution features in the etch barrier; Breakthrough Etch - the residual etch barrier (base layer) is etched away with a short halogen plasma etch; Transfer Etch - the pattern is transferred into the transfer layer with an anisotropic oxygen reactive ion etch, creating high-aspect ratio, high resolution features in the organic transfer layer. The photopolymerization of the feature is often accompanied by densification, see Fig. 2, which can be modeled by linear elasticity with a thermal expansion coefficient (CTE) [5]. We may define the problem: find the displacement vector field u such that

$$\begin{cases} u \in V \subset [H^1(\Omega)]^3 \\ b(u,v) = l(v) \quad \forall v \in V \end{cases} \qquad (8)$$

where $V = \{ v \in [H^1(\Omega)]^3 : \mathrm{tr}(v) = 0 \text{ on } \Gamma_D \}$, $\Omega \subset \mathbb{R}^3$ stands for the cubic-shape domain, $\Gamma_D$ is the bottom of the cube and $H^1(\Omega)$ is the Sobolev space.

$$b(u,v) = \int_\Omega E_{ijkl}\, u_{k,l}\, v_{i,j}\, dx; \qquad l(v) = \int_\Omega \alpha\, v_{i,i}\, dx. \qquad (9)$$

Here $E_{ijkl} = \mu(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + \lambda\,\delta_{ij}\delta_{kl}$ stands for the constitutive equation for the isotropic material, where $\mu$ and $\lambda$ are the Lamé coefficients. The thermal expansion coefficient (CTE) $\alpha = \frac{\Delta V}{V\,\Delta T}$ is defined as the volumetric shrinkage of the etch barrier divided by 1 K.


Fig. 2. Modeling of the Step-and-Flash Imprint Lithography process

6 Numerical Results

The proposed algorithm was executed for the problem of finding the proper value of the thermal expansion coefficient enforcing shrinkage of the feature comparable with experiments. The algorithm performed 43 iterations on the first optimal mesh (see the third picture in Fig. 1) providing 15% relative error of the direct problem solution. Then, the computational mesh was hp refined to increase the accuracy of the direct solver. The inverse algorithm continued by utilizing 8% relative error mesh (see the fourth picture in Fig.1) for the direct problem. After 39 iterations the mesh was again hp refined (see the fifth picture in Fig.1) to provide 5% relative error of the direct problem solution. After 35 iterations of the inverse algorithm on the most accurate mesh the inverse problem was solved. The history of the (CTE) parameter convergence on the first, second and third optimal meshes is presented in Fig. 3.

Fig. 3. History of convergence of CTE parameter on 3 meshes

We compared the total execution time equal to 0.1s + 43 × 2 × 0.1s + 1s + 39 × 2 × 1s + 10s + 35 × 2 × 10s = 8.7 + 79 + 710 = 797.7s with the classical algorithm, where the inverse problem was solved on the most accurate FEM mesh from


the beginning. The classical algorithm required 91 iterations to obtain the same result. The execution time of the classical algorithm was 10s + 91 × 2 × 10s = 1830s. This difference will grow when the inverse algorithm looks for more inverse problem parameters at the same time, since the number of direct problem solutions necessary to obtain new propositions of the inverse parameters will grow.

7 The Molecular Static Model

The energy of the experimental data J(d*) was estimated from the molecular static model, which provides realistic simulation results, well comparable with experiments [5]. During the photopolymerization, the Van der Waals bonds between particles forming the polymer chain are converted into stronger covalent bonds. The average distance between particles decreases and a volumetric contraction of the feature occurs. In the following, the general equations governing the equilibrium configuration of the molecular lattice structure after the densification and the removal of the template are derived. Let us consider an arbitrary pair of bonded molecules with indices α and β and given lattice position vector $p_\alpha = (\hat{x}_\alpha, \hat{y}_\alpha, \hat{z}_\alpha)$. The unknown equilibrium position vector of particle α, under the action of all its intermolecular bonds, is denoted $x_\alpha = (x_\alpha, y_\alpha, z_\alpha)$; the displacement from the initial position in the lattice to the equilibrium position is represented by the vector $u_\alpha = x_\alpha - p_\alpha$. Let $\|\cdot\|$ denote the vector norm in $\mathbb{R}^3$, and let $r_{\alpha\beta} = \|x_\beta - x_\alpha\|$ be the distance between particles α and β. Then the force $F_{\alpha\beta}$ along the vector $x_\beta - x_\alpha$ is governed by the potential function $V(r_{\alpha\beta})$,

$$F_{\alpha\beta} = -\frac{\partial V(r_{\alpha\beta})}{\partial r_{\alpha\beta}}\, \frac{x_\beta - x_\alpha}{\|x_\beta - x_\alpha\|}\,, \qquad (10)$$

where the first term represents the magnitude and the second term the direction. If the indices of the bonded neighboring particles of particle α are collected in the set $N_\alpha$, then its force equilibrium reads

$$\sum_{\beta \in N_\alpha} F_{\alpha\beta} = -\sum_{\beta \in N_\alpha} \frac{\partial V(r_{\alpha\beta})}{\partial r_{\alpha\beta}}\, \frac{x_\beta - x_\alpha}{\|x_\beta - x_\alpha\|} = 0 \,. \qquad (11)$$

The characteristics of the potential functions $\{V(r_{\alpha\beta})\}_{\beta \in N_\alpha}$ are provided by the Monte Carlo simulation [5]. The covalent bonds are modeled by spring forces

$$F_{\alpha\beta} = C_1 r + C_2 \,. \qquad (12)$$

The spring-like potential $V(r)$ is quadratic. The Van der Waals bonds are modeled by non-linear forces and the Lennard-Jones potentials

$$V(r) = C_{\alpha\beta}\left[ \left(\frac{\sigma_{\alpha\beta}}{r}\right)^{n_{\alpha\beta}} - \left(\frac{\sigma_{\alpha\beta}}{r}\right)^{m_{\alpha\beta}} \right], \qquad (13)$$

where $r = \|x_\beta - x_\alpha\|$.


The equilibrium equations are non-linear and the Newton-Raphson linearization procedure is applied to solve the system. The resulting shrinkage of the feature is presented in Figure 2.
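The sketch below illustrates the kind of equilibrium computation involved, for a toy cluster with one spring bond (Eq. 12) and one Lennard-Jones bond (Eq. 13). All parameters are hypothetical, and a quasi-Newton energy minimization (SciPy BFGS) is used here in place of the Newton-Raphson solve of the force equations described in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical parameters: one covalent (spring) bond and one Van der Waals (LJ) bond
K, R0 = 5.0, 1.0                 # spring stiffness and rest length
C, SIG, N, M = 1.0, 1.2, 12, 6   # Lennard-Jones-type constants of Eq. (13)

def spring(r):                   # quadratic ("spring-like") potential, cf. Eq. (12)
    return 0.5 * K * (r - R0) ** 2

def lj(r):                       # Lennard-Jones potential of Eq. (13)
    return C * ((SIG / r) ** N - (SIG / r) ** M)

bonds = [(0, 1, spring), (1, 2, lj)]                   # (particle, particle, potential)
p_lattice = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [2.4, 0.0, 0.0]])

def energy(x_flat):
    x = x_flat.reshape(-1, 3)
    e = sum(pot(np.linalg.norm(x[b] - x[a])) for a, b, pot in bonds)
    # weak tether to the lattice positions to fix rigid-body motion (illustration only)
    return e + 1e-3 * np.sum((x - p_lattice) ** 2)

res = minimize(energy, p_lattice.ravel(), method="BFGS")   # quasi-Newton equilibrium search
x_eq = res.x.reshape(-1, 3)
print(np.linalg.norm(x_eq[1] - x_eq[0]))   # spring bond relaxes toward its rest length
```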

8 Conclusions and Future Work

– The proper balance between the errors of the global optimization method and of the direct problem solver allows for efficiently speeding up the solution process of difficult inverse problems. An analytic relation between both errors is necessary.
– The relation between the objective function error and the relative error of the hp-adaptive FEM has been derived. The objective error was expressed as the energy difference between the numerical solution and the experimental data.
– The strategy relating the convergence ratios of the inverse and direct problem solutions has been proposed and successfully tested for searching for the value of the CTE parameter in the SFIL process. We obtained about a 2.4 speedup in comparison to the solution without error balancing for the simple test example. A higher speedup may be expected for problems with larger dimension.
– Future work will include the derivation of analytic relations between the hp-adaptive FEM error and objective function errors defined in other ways. The possibilities of further speeding up the solver will be tested by utilizing the parallel version of the hp-adaptive FEM codes [3].

Acknowledgments. The work reported in this paper was supported by Polish MNiSW grant no. 3 TO 8B 055 29.

References 1. Ciarlet P., The Finite Element Method for Elliptic Problems. North Holland, New York (1994) 2. Demkowicz L., Computing with hp-Adaptive Finite Elements, Vol. I. Chapman & Hall/Crc Applied Mathematics & Nonlinear Science, Taylor & Francis Group, Boca Raton London New York (2006) 3. Paszy´ nski, M., Demkowicz, L., Parallel Fully Automatic hp-Adaptive 3D Finite Element Package. Engineering with Computers (2006) in press. 4. Paszy´ nski, M., Szeliga, D., Barabasz, B. Maciol, P., Inverse analysis with 3D hp adaptive computations of the orthotropic heat transport and linear elasticity problems. VII World Congress on Computational Mechanics, Los Angeles, July 16-22 (2006) 5. Paszy´ nski, M., Romkes, A., Collister, E., Meiring, J., Demkowicz, L., Willson, C. G., On the Modeling of Step-and-Flash Imprint Lithography using Molecular Statics Models. ICES Report 05-38 (2005) 1-26 6. Rachowicz, W., Pardo D., Demkowicz, L., Fully Automatic hp-Adaptivity in Three Dimensions. ICES Report 04-22 (2004) 1-52

Topology Preserving Tetrahedral Decomposition of Trilinear Cell

Bong-Soo Sohn

Department of Computer Engineering, Kyungpook National University
Daegu 702-701, South Korea
[email protected]
http://bh.knu.ac.kr/~bongbong

Abstract. We describe a method to decompose a cube with trilinear interpolation into a set of tetrahedra with linear interpolation, where isosurface topology is preserved during decomposition for all isovalues. This method is useful for converting from a rectilinear grid into a tetrahedral grid in scalar data with topological correctness. We apply our method to topologically and geometrically accurate isosurface extraction. Keywords: volume visualization, isosurface, subdivision, topology.

1 Introduction

Scientific simulations and measurements often generate real-valued volumetric data in the form of function values sampled on a three dimensional (3D) rectilinear grid. Trilinear interpolation is a common way to define a function inside each cell of the grid. It is computationally simple and provides a good estimation of the function between sampled points in practice. Isosurface extraction is one of the most common techniques for visualizing volumetric data. An isosurface is a level set surface defined as I(w) = {(x, y, z) | F(x, y, z) = w}, where F is a function defined from the data and w is an isovalue. The isosurface I is often polygonized for modeling and rendering purposes. We call I a trilinear isosurface to distinguish it from a polygonized isosurface when F is a trilinear function. Although rectilinear volumetric data is the most common form, some techniques [9,4,3,1,5] require a tetrahedral grid domain due to its simplicity. In order to apply such techniques to rectilinear volumetric data, people usually decompose a cube into a set of tetrahedra where a function is defined by linear interpolation. The decomposition may significantly distort the function in terms of its level sets (e.g. isosurfaces) topology and geometry. See [2] for examples. 2D/3D meshes with undesirable topology extracted from the distorted function may cause a serious inaccuracy problem in various simulations such as the Boundary Element and Finite Element Method, when the extracted meshes are used as geometric domains for the simulation [10]. We describe a rule that decomposes a cube into a set of tetrahedra without changing the isosurface topology for any isovalue in the cube. The rule provides topological correctness to any visualization algorithm that runs on tetrahedral data converted from rectilinear data. The key idea is to add saddle points and connect them to the vertices


of a cube to generate tetrahedra. In case there is no saddle point, we perform a standard tetrahedral decomposition method [2] without inserting any points. The tetrahedra set converted from a cube involves a minimal set of points that can correctly capture the level set topology of trilinear function because level set topology changes only at critical points. Then, we apply our method to topologically and geometrically accurate isosurface triangulation for trilinear volumetric data. The remainder of this paper is organized as follows. In section 2, we explain trilinear isosurface and its topology determination. In section 3, we describe topology preserving tetrahedral decomposition of a trilinear cell. Then, in section 4, we give applications and results. Finally, we conclude this paper in section 5.

2 Trilinear Isosurface Topology The function inside a cube, F c , is constructed by trilinear interpolation of values on eight vertices of the cube. F c (x, y, z) = F000(1 − x)(1 − y)(1 − z) + F001(1 − x)(1 − y)z + F010(1 − x)y(1 − z) + F011(1 − x)yz + F100x(1 − y)(1 − z) + F101x(1 − y)z + F110xy(1 − z) + F111xyz This means the function on any face of a cell, F f is a bilinear function computed from four vertices on the face. F f (x, y) = F00 (1 − x)(1 − y) + F01(1 − x)y + F10x(1 − y) + F11xy Saddle points, where their first partial derivative for each direction is zero, play important roles in determining correct topology of a trilinear isosurface in a cube. Computing f f the location of face and body saddles which satisfy Fx = Fy = 0 and Fxc = Fyc = Fzc = 0 respectively is described in [6] and [8]. Saddle points outside the cube is ignored. Marching Cubes (MC) [7] is the most popular method to triangulate a trilinear isosurface using sign configurations of eight vertices in a cube. It is well known that some of the sign configurations have ambiguities in determining contour connectivity. The papers [6,8] show that additional sign configurations of face and body saddle points can disambiguate the correct topology of a trilinear isosurface. Figure 1 shows every possible isosurface connectivity of trilinear function [6].

3 Topology Preserving Tetrahedral Decomposition

In this section, we describe a rule for decomposing a cube with trilinear interpolation into a set of tetrahedra with linear interpolation, where the isosurface topology is preserved for all isovalues during the decomposition. The tetrahedral decomposition is consistent for the entire rectilinear volumetric data in the sense that the tetrahedra are seamlessly matched on the face between any two adjacent cubes. The rule is based on the analysis of


Fig. 1. Possible isosurface topology of trilinear interpolant. Numberings are taken from [6].

face and body saddles in a cell (e.g. cube). It is easy to understand the overall idea by looking at the 2D case in Figure 2, which is much simpler than the 3D case. Let $s_b$ and $s_f$ be the number of body saddles and face saddles respectively. There are five cases based on the number of face saddles and body saddles: (i) $s_b = 0$ and $s_f = 0$, (ii) $s_b = 0$ and $1 \le s_f \le 4$, (iii) $s_b = 1$ and $0 \le s_f \le 4$, (iv) $s_b = 0$ and $s_f = 6$, and (v) $1 \le s_b \le 2$ and $s_f = 6$. The case where the number of face saddles is six is the most complicated case and requires careful treatment. Note that the number of face saddles cannot be five, and that the number of body saddles cannot be two unless the number of face saddles is six. The decomposition rule for each case is as follows:

– case (i): Decompose the cube into 6 tetrahedra without inserting any point, as in [2].
– case (ii): Choose one face saddle and decompose the cube into five pyramids by connecting the face saddle to the four corner vertices of each face except the face that contains the face saddle. If a face of a pyramid contains a face saddle, the face is decomposed into four triangles that share the face saddle and the pyramid is decomposed into four tetrahedra. If a face of a pyramid does not contain a face saddle, the pyramid is decomposed into two tetrahedra in a consistent manner. If the number of face saddles is three or four, we need to choose the second biggest face saddle.
– case (iii): Decompose the cube into six pyramids by connecting a body saddle to the four corner vertices of each face of the cube. Like in (ii), if a face of a pyramid contains a face saddle, the face is decomposed into four triangles and the pyramid is decomposed into four tetrahedra. Otherwise, the pyramid is decomposed into two tetrahedra.
– case (iv): A diamond is created by connecting the six face saddles. The diamond is decomposed into four tetrahedra. Twelve tetrahedra are created by connecting the two vertices of each of the twelve edges of the cube and the two face saddles on the two faces which share the edge. Eight tetrahedra are created by connecting each of the eight faces of the diamond and the corresponding vertex of the cube. This decomposes the cube into twenty four tetrahedra.


Fig. 2. (a) Triangular decomposition of a face by connecting a face saddle to each edge of the face resolves an ambiguity in determining correct contour connectivity. (b) Triangular decomposition based on a saddle point preserves level sets topology of bilinear function for every case.

Fig. 3. Example of rules for topology preserving tetrahedral decomposition: (a) $s_b = 0$, $s_f = 2$; (b) $s_b = 1$, $s_f = 2$; (c) $s_b = 0$, $s_f = 6$

– case (v) : Figure 4 shows the cell decomposition when there are two body saddles and six face saddles. It generates two pyramids and four prisms where pyramids and prisms are further decomposed into tetrahedra. Choosing any two parallel faces that are connected to body saddles to form pyramids is fine. We classify saddle points as three small face saddles, a small body saddle, a big body saddle, and three big face saddles based on increasing order of the saddle values. Let small/big corner vertex be the vertex adjacent to three faces with small/big face saddles. The two parallel faces with small and big face saddles are connected to small and big body saddles respectively to form the two pyramids. The four prisms are decomposed into tetrahedra in a way that the small corner vertex should not be connected to the big body saddle and the big corner vertex should not be connected to the small body saddle. To satisfy this constraint, two types of decomposition of a prism are possible as shown in Figure 4 (c). In case sb = 1, we consider a small or big body saddle moves to a face saddle of a pyramid that is connected to the body saddle and hence the pyramid is collapsed. In this case, the pair of parallel faces for forming the pyramids are chosen in a way that the face saddle of a collapsed pyramid should not be the smallest or the biggest face saddle. Figure 3 shows several examples of applying the tetrahedral decomposition rule to a cube with different number of body and face saddles.
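The case analysis can be organized as a small dispatch on $(s_b, s_f)$; the skeleton below is only an outline with hypothetical return values, omitting the actual geometric constructions.

```python
def decompose_cell(n_body_saddles, n_face_saddles):
    """Skeleton of the case analysis above (illustration only)."""
    sb, sf = n_body_saddles, n_face_saddles
    if sf == 6:
        # cases (iv)/(v): diamond of six face saddles, with zero, one or two body saddles
        return "diamond/prism decomposition (24+ tetrahedra)"
    if sb == 1:
        # case (iii): six pyramids around the body saddle, split by face saddles
        return "body-saddle pyramids"
    if sb == 0 and sf >= 1:
        # case (ii): five pyramids around a chosen face saddle
        return "face-saddle pyramids"
    # case (i): no saddles -> standard 6-tetrahedra split of the cube [2]
    return "standard 6-tet split"

for sb, sf in [(0, 0), (0, 2), (1, 3), (0, 6), (2, 6)]:
    print((sb, sf), "->", decompose_cell(sb, sf))
```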


Fig. 4. (a) Cell decomposition when two body saddles exist. Dark blue, green, light blue, magenta, brown, and red circles represent small corner vertex, small face and body saddles, big body and face saddles, and big corner vertex. (b) Isosurface topology for different isovalues in case (a).


Fig. 5. (a) Demo program of topology preserving tetrahedral decomposition (b) Modified cell decomposition and triangulation for each case that has ambiguity

In the appendix, we give a brief proof of why the topology of the level sets of the trilinear function in a cube is preserved during the decomposition for each case. As shown in Figure 5(a), we implemented the above method and produced a program that takes an isovalue and the values of the eight corner vertices of a cube, and displays the tetrahedral decomposition with either a trilinear isosurface or a polygonized isosurface.

4 Trilinear Isosurface Triangulation

Readers may think that isosurface extraction for each tetrahedron, which is called Marching Tetrahedra (MT), after the tetrahedral decomposition would be a simple and


natural way. However, this direct application of the tetrahedral decomposition may cause extra cost in terms of the number of generated triangles and speed, because even a very simple case (e.g. type 1 in Figure 1) requires tetrahedral decomposition. We exploit the benefits of MC - mesh extraction with a small number of elements and with high visual fidelity - and of MT - ambiguity removal and simplicity - by performing only the necessary subdivisions. We describe a modified cell decomposition method for resolving a triangulation ambiguity and reconstructing a triangular approximation of an isosurface with correct topology. The main idea is to decompose a cube into a set of tetrahedral and pyramidal cells which do not cause a triangulation ambiguity while preserving the contour topology in the cube. A facial ambiguity is resolved by decomposing a face cell into four triangles that share a face saddle point. Likewise, an internal ambiguity is resolved by decomposing a cube into six pyramids which share a body saddle point. If there is an internal ambiguity and the isosurface does not contain a tunnel shape (neck), a body saddle point is not required in the cell decomposition for reconstructing the isosurface with correct topology. There are four cases: (a) no face ambiguity with no tunnel, (b) face ambiguity (less than six) with no tunnel, (c) tunnel, and (d) six face ambiguities with no tunnel.

– case (a): No decomposition is necessary. Just perform an MC-type triangulation.
– case (b): Choose a face saddle on a face with ambiguity and decompose the cube into five pyramids by connecting the face saddle to the four corner vertices of each face except the face that contains the face saddle. If a face of a pyramid contains a face ambiguity, the face is decomposed into four triangles that share the face saddle and the pyramid is decomposed into four tetrahedra. If the number of face ambiguities is three, we need to choose the second face saddle.
– case (c): Decompose the cube into six pyramids by connecting a body saddle that is involved with a tunnel to the four corner vertices of each face of the cube. Like in (b), if a face of a pyramid contains a face ambiguity, the face is decomposed into four triangles and the pyramid is decomposed into four tetrahedra.
– case (d): We perform the same decomposition as case (iv) in Section 3, except that we decompose the diamond into two pyramids instead of four tetrahedra (13(a) in Figure 5(b)).

Isosurface configurations 1, 2, 5, 8, 9, and 11 in Figure 1 are the only possible cases which do not have an ambiguity in a cube. No decomposition is necessary for these cases; triangulations for such cubes are listed in [7]. We implemented the modified decomposition method, applied it to each case in Figure 1, and extracted a triangular isosurface. Figure 5 shows the modified cell decomposition and its triangulation for every possible configuration of a trilinear isosurface that has an ambiguity. It confirms that the triangular isosurface generated from the cell decomposition is topologically equivalent to the trilinear isosurface of the trilinear function, by comparison of Figure 1 and Figure 5(b).

5 Conclusion

We described a method for tetrahedral decomposition of a cube where the level set topology is preserved. We envision that many visualization algorithms that can take only tetrahedral


grid data can utilize our method for dealing with trilinear volumetric data instead of using standard tetrahedral decomposition methods that may significantly distort level sets topology and geometry. Acknowledgments. The author is grateful to Prof. Chandrajit Bajaj who gave various inspirational ideas that are related to this work. This research was supported by Kyungpook National University Research Fund, 2006.

References 1. Bajaj, C. L., Pascucci V., Schikore D. R.: The contour spectrum. In IEEE Visualization Conference (1997) 167–1737 2. Carr, H., M¨oller, T., Snoeyink, J.: Simplicial subdivisions and sampling artifacts. In IEEE Visualization Conference (2001) 99–108 3. Carr, H., Snoeyink, J.: Path seeds and flexible isosurfaces using topology for exploratory visualization. In Proceedings of VisSym (2003) 49–58 4. Carr, H., Snoeyink, J., Axen, U.: Computing contour trees in all dimensions. Computational Geometry: Theory and Applications Vol. 24. (2003) 75–94 5. Edelsbrunner, H., Harer, J., Natarajan, V., Pascucci, V.: Morse complexes for piecewise linear 3-manifolds. In Proceeding of the 19-th ACM Symp. on Comp. Geom. (2003) 361–370 6. Lopes, A., Brodlie, K.: Improving the robustness and accuracy of the marching cubes algorithm for isosurfacing. IEEE Transactions on Visualization and Computer Graphics (2003) 19–26 7. Lorensen, W.E., Cline, H.E.: Marching cubes: A high resolution 3D surface construction algorithm. In ACM SIGGRAPH (1987) 163–169 8. Natarajan, B.K.: On generating topologically consistent isosurfaces from uniform samples. The Visual Computer, Vol. 11. (1994) 52–62 9. Nielson, G.M., Sung, J.: Interval volume tetrahedrization. In IEEE Visualization Conference (1997) 221–228 10. Zhang, J., Bajaj, C. L., Sohn, B.-S.: 3D finite element meshing from imaging data. Computer Methods in Applied Mechanics and Engineering (CMAME) Vol. 194. (2005) 5083–5106

Appendix: Proof of Topology Preservation During Decomposition

We call the isosurface extracted from a set of tetrahedra the PL isosurface. The proof is done by showing that, for any isovalue in a cube, the numbers of PL isosurface components and trilinear isosurface components are the same and the PL isosurface components are topologically equivalent to the trilinear isosurface components. Note that the PL isosurface inside a cube should always be a manifold (except for a degenerate case). There is no isolated closed PL isosurface component in a cube. First of all, it is easy to see that the decomposition preserves the trilinear isosurface topology on each face. A face is decomposed into four triangles when there is a face saddle on the face and decomposed into two triangles when there is no face saddle. There are three possible contour connectivities except for the empty case, where symmetric cases are ignored. Figure 2 shows that each isocontour connectivity of the bilinear function on a face is preserved for any possible triangular mesh generated from our decomposition rule.


We classify corner vertices and saddles into up-vertices and down-vertices based on the check whether a function value of a vertex is bigger than or lower than an isovalue. We use a term, component-vertices, to indicate either up-vertices or down-vertices that have bigger or same number of connected components compared to the other one. Except for the configurations 13.5.1 and 13.5.2 in Figure 1, a connected component of component-vertices uniquely represents an isosurface component. Consider cases (i), (ii), and (iii). If there is no hole, connected components of component-vertices on faces are separated each other inside a cube to form a simple sheet (disk) for each connected component, which is consistent with actual trilinear isosurface topology. If there is a hole, the connected components of component-vertices on faces are connected through a saddle point inside a cube to form a tunnel isosurface. The reason why we choose the second biggest face saddle in the case (ii) with three or four face saddles and (v) with one body saddle is to avoid connecting components of component-vertices inside a cube that needs to be separated. For example, if we choose the smallest or the biggest face saddle in the configuration 7.2, two components of component-vertices on faces of a cube can be connected through an edge and hence two separate isosurface components would be connected with a tunnel. In cases (iv) and (v) where the number of face saddles is six, the configurations except for 13.5.1 and 13.5.2 are proved in a similar way as the cases of (i), (ii), and (iii). The configurations 13.5.1 and 13.5.2 can be proved by taking out tetrahedra that contributes to the small isosurface component and apply the same proof of (i), (ii), and (iii) to the rest of isosurfaces for topological correctness.

FITTING: A Portal to Fit Potential Energy Functionals to ab initio Points

Leonardo Pacifici², Leonardo Arteconi¹, and Antonio Laganà¹

¹ Department of Chemistry, University of Perugia, via Elce di Sotto, 8, 06123 Perugia, Italy
² Department of Mathematics and Computer Science, University of Perugia, via Vanvitelli, 1, 06123 Perugia, Italy
xleo,bodynet,[email protected]

Abstract. The design and the implementation in a Grid environment of an Internet portal devoted to best fitting potential energy functionals to ab initio data for few body systems is discussed. The case study of a generalized LEPS functional suited to fit reactive three body systems is discussed with an application to the NO2 system. Keywords: webportal, fitting, ab initio calculations, potential energy surfaces, multiscale simulations.

1 Introduction

Thanks to recent progress in computing technologies and network infrastructures it has become possible to assemble realistic accurate molecular simulators on the Grid. This has allowed us to develop a Grid Enabled Molecular Simulator (GEMS) [1,2,3,4] by exploiting the potentialities of the Grid infrastructure of EGEE [5]. The usual preliminary step of molecular approaches to chemical problems is the construction of a suitable potential energy surface (PES) out of the already available theoretical and experimental information on the electronic structure of the system considered. In the first prototype production implementation of GEMS, GEMS.0 [6], it is assumed that the available information on the electronic structure of the system considered is formulated as a LEPS [7] PES. Unfortunately, extended use in dynamics studies has singled out the scarce flexibility of the LEPS functional in describing potential energy surfaces having bent (non collinear) minimum energy paths to reaction. To progress beyond the limits of GEMS.0 an obvious choice was, therefore, not only to derive the value of the LEPS parameters from ab initio estimates of the electronic energies but also to add further flexibility to the LEPS functional form. For the former goal an Internet portal, SUPSIM, has been assembled as already discussed in the literature [8]. In the present paper we discuss the latter goal of making the LEPS functional more general. The paper deals in Section 2 with a generalization of the functional representation of the LEPS and in Section 3 with the assemblage of an Internet portal, called FITTING, devoted to the fitting of the LEPS to ab initio data. Finally, in Section 4 the case study of the N + O2 system is discussed.

2 A Generalization of the LEPS Potential

Most often molecular potentials are expressed as a sum of the various terms of a many body expansion [9,10]. In the case of three-atom systems such a sum is made of three one-body, three two-body and one three-body terms as follows:

$$V(r_{AB}, r_{BC}, r_{AC}) = V_A^{(1)} + V_B^{(1)} + V_C^{(1)} + V_{AB}^{(2)}(r_{AB}) + V_{BC}^{(2)}(r_{BC}) + V_{AC}^{(2)}(r_{AC}) + V_{ABC}^{(3)}(r_{AB}, r_{BC}, r_{AC}) \qquad (1)$$

where the $V^{(1)}$ terms are the one-body ones (taken to be zero for atoms in ground state) while the $V^{(2)}$ and $V^{(3)}$ terms are the two- and three-body ones and are usually expressed as polynomials in the related internuclear distances $r_{AB}$, $r_{BC}$ and $r_{AC}$. These polynomials are damped by proper exponential-like functions of the related internuclear distances in order to vanish at infinity. More recently, use has been also made of Bond Order (BO) variables [11,12]. The $n_{ij}$ BO variable is related to the internuclear distance $r_{ij}$ of the ij diatom as follows:

$$n_{ij} = \exp\left[-\beta_{ij}\,(r_{ij} - r_{e\,ij})\right] \qquad (2)$$

In Eq. 2, $\beta_{ij}$ and $r_{e\,ij}$ are adjustable parameters (together with $D_{ij}$) of the best fit procedure trying to reproduce theoretical and experimental information of the ij diatomic molecule using the model potential

$$V_{ij}^{(2)}(r_{ij}) = D_{ij}\,P(n_{ij}) \qquad (3)$$

where $P(n_{ij})$ is a polynomial in $n_{ij}$. The LEPS functional can be also written as a sum of two and three body BO terms. The usual LEPS can be written, in fact, as

$$V(r_{AB}, r_{BC}, r_{AC}) = {}^1E_{AB} + {}^1E_{BC} + {}^1E_{AC} - J_{AB} - J_{BC} - J_{AC} - \left[J_{AB}^2 + J_{BC}^2 + J_{AC}^2 - J_{AB}J_{BC} - J_{AB}J_{AC} - J_{BC}J_{AC}\right]^{1/2} \qquad (4)$$

where the $J_{ij}$ terms are formulated as

$$J_{ij} = \frac{1}{2}\left({}^1E_{ij} - a_{ij}\,{}^3E_{ij}\right) \qquad (5)$$

with $a_{ij}$ being an adjustable parameter (often expressed as $(1 - S_{ij})/(1 + S_{ij})$, where $S_{ij}$ is the Sato parameter) and $^1E$ and $^3E$ being second order BO polynomials of the Morse

$${}^1E_{ij} = D_{ij}\,n_{ij}\,(n_{ij} - 2) \qquad (6)$$

and antiMorse

$${}^3E_{ij} = \frac{D_{ij}}{2}\,n_{ij}\,(n_{ij} + 2) \qquad (7)$$


Fig. 1. A pictorial view of an atom-diatom system

type, respectively. Because of the choice of truncating the polynomial of Eq. 3 to the second order, $\beta_{ij}$, $r_{e\,ij}$ and $D_{ij}$ correspond to the force constant, the equilibrium distance and the dissociation energy of the ij diatom, respectively. The two-body terms correspond therefore to the three $^1E_{ij}$ Morse potentials. The three body component $V^{(3)}$ of the potential is then worked out by subtracting the three diatomic terms from the ab initio data. The resulting values of the three body term are then fitted by optimizing the value of the $a_{ij}$ parameters, which are taken to be constant in the usual LEPS functional. In our generalization, as proposed some years ago by Takayanagi and Sato [13] and by Brown et al. [14], the Sato variables $S_{ij}$ are made to depend on the angle opposite the bond considered (respectively γ, α and β, as sketched in Fig. 1) to bear a kind of three body connotation. Accordingly, the $a_{ij}$ coefficients of Eq. 5 can be formulated as depending on the angle opposite the ij diatom as follows:

$$a_{AB} = c_{\gamma 1} + c_{\gamma 2}\cos\gamma + c_{\gamma 3}\cos^2\gamma + c_{\gamma 4}\cos^3\gamma + c_{\gamma 5}\cos^4\gamma \qquad (8)$$

$$a_{BC} = c_{\alpha 1} + c_{\alpha 2}\cos\alpha + c_{\alpha 3}\cos^2\alpha + c_{\alpha 4}\cos^3\alpha + c_{\alpha 5}\cos^4\alpha \qquad (9)$$

$$a_{AC} = c_{\beta 1} + c_{\beta 2}\cos\beta + c_{\beta 3}\cos^2\beta + c_{\beta 4}\cos^3\beta + c_{\beta 5}\cos^4\beta \qquad (10)$$
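For illustration, the generalized LEPS of Eqs. (2)-(10) can be evaluated as in the sketch below (Python, with hypothetical diatomic parameters and coefficients; the equation forms mirror the reconstruction above rather than any published parameterization).

```python
import numpy as np

# Hypothetical diatomic parameters (D_ij, beta_ij, r_e,ij) for the three pairs
PAIRS = {"AB": (5.0, 2.0, 1.2), "BC": (5.0, 2.0, 1.2), "AC": (6.0, 2.2, 1.1)}

def bo(r, beta, re):                  # Bond Order variable, Eq. (2)
    return np.exp(-beta * (r - re))

def e_singlet(n, D):                  # Morse term, Eq. (6)
    return D * n * (n - 2.0)

def e_triplet(n, D):                  # anti-Morse term, Eq. (7)
    return 0.5 * D * n * (n + 2.0)

def sato(cos_t, c):                   # angle-dependent coefficient, Eqs. (8)-(10)
    return sum(ck * cos_t ** k for k, ck in enumerate(c))

def leps(r, cos_angles, coeffs):
    """Generalized LEPS, Eq. (4), with angle-dependent a_ij (illustrative sketch)."""
    E1, J = {}, {}
    for ij, (D, beta, re) in PAIRS.items():
        n = bo(r[ij], beta, re)
        E1[ij] = e_singlet(n, D)
        a = sato(cos_angles[ij], coeffs[ij])
        J[ij] = 0.5 * (E1[ij] - a * e_triplet(n, D))      # Eq. (5)
    cross = J["AB"] * J["BC"] + J["AB"] * J["AC"] + J["BC"] * J["AC"]
    return (sum(E1.values()) - sum(J.values())
            - np.sqrt(sum(v * v for v in J.values()) - cross))

coeffs = {ij: [0.3, 0.0, 0.0, 0.0, 0.0] for ij in PAIRS}   # constant-Sato limit
r = {"AB": 1.3, "BC": 2.0, "AC": 1.2}
cos_angles = {"AB": -0.5, "BC": 0.2, "AC": -0.3}
print(leps(r, cos_angles, coeffs))
```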

3 The Internet Portal Structure

To handle the fitting procedure in an efficient way we developed a web interface (called FITTING) acting as an Internet portal and ensuring the advantages typical of a Grid based environment. This choice was motivated by the wish of being independent from the operating system available on the user side and therefore being able to modify and upgrade the software without the participation of the user. Other motivations were the user friendliness and the ubiquitous usability of the web graphical interfaces. For this purpose we created a cross-browser site using only server-side technologies. Accordingly, the end-user can utilize the FITTING web GUI (Graphical User Interface) by making use only of a W3Compliant web browser [15]. The related Web Environment was implemented using the following elements:


1. A dynamic web server, based on the Apache Web server [16] containing the PHP4 module [17].
2. An RDBMS (MySQL [18] in our case) that handles the user data and supports the authentication phase.

The Portal was developed and tested using GPL software and free software (namely the Apache Web Server 1.3.32 and MySQL 4.1.3 powered by FreeBSD 5.4). Because of the complexity of the workflow of FITTING, we produced a set of dynamically generated pages according to the following scheme:

1. registration of the user
2. selection of the atoms and the functional form
3. specification of the ab initio data
4. specification of additional ab initio data
5. generation of the best-fit parameters

These pages take care of managing the execution of the computational procedure by the Web server and help the user to define the input parameters of the fitting calculation through the GUI.

Fig. 2. Screenshot of a System configuration page of FITTING

As a first step the user registers through the GUI when first accessing the portal. After the verification of the identity, the user is assigned an account and the associated login and password. At this point the user can access the portal and run the fitting procedure. Because of the multiuser environment adopted, multiple requests to the web server are dealt using the Session support (enabled in PHP by default). In the second step, the user selects, using the same GUI, the atoms composing the triatomic system considered and the fitting functional form to be used (see Fig. 2). In the third step, the server creates a dynamic web page which prompts the user to supply the name of the file of the ab initio data to be used during the


Fig. 3. Screenshot of a System configuration page of FITTING

calculation. In the fourth step, the same page allows the user to insert new ab initio data. The page asks for the files of diatomic ab initio data (from one to three, depending on the symmetry of the investigated system), as illustrated in Fig. 3. These files contain the ab initio values arranged in a two column format (the first column contains the value of the internuclear distance while the second column contains the corresponding value of the diatomic ab initio potential energy). The page also prompts for a file containing the ab initio triatomic data. This file contains in the first three columns the values of the three internuclear distances and in the fourth column the value of the corresponding triatomic ab initio potential energy. It is also possible to introduce other potential energy values to enforce some specific features of the potential or to constrain some input parameters. These choices depend on the functional form adopted for the fitting. Finally, the best fit is carried out using the LMDER routine of MINPACK [20], which is based on an improved version of the Levenberg-Marquardt method [21] for solving nonlinear least squares problems. The calculated best-fit values are inserted, together with the already determined diatomic parameters, in the automatically generated source of the corresponding Fortran routine.
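As an illustration of this final step, SciPy's least_squares with method="lm" wraps the same MINPACK Levenberg-Marquardt routines used by LMDER. The sketch below fits the fifteen angular coefficients of Eqs. (8)-(10) to synthetic data with a toy residual model; it only illustrates the least-squares call, not the actual FITTING objective.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical "ab initio" three-body values at 50 geometries, encoded here only by
# the cosines of the three angles; in the portal these come from the user-supplied file.
geometries = np.random.default_rng(0).uniform(-1.0, 1.0, size=(50, 3))
v3_ab_initio = 0.1 * geometries[:, 0] ** 2 - 0.05 * geometries[:, 1]   # synthetic target

def v3_model(c, cosines):
    """Toy stand-in using only the polynomial structure of Eqs. (8)-(10):
    five c-coefficients per angle, fifteen in total."""
    c = c.reshape(3, 5)
    return sum((c[i] * np.vander(cosines[:, i], 5, increasing=True)).sum(axis=1)
               for i in range(3))

def residuals(c):
    return v3_model(c, geometries) - v3_ab_initio

fit = least_squares(residuals, x0=np.zeros(15), method="lm")   # Levenberg-Marquardt
print(fit.x.reshape(3, 5))
```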

4 The N + O2 Case Study

As already mentioned, in order to test the developed procedure, we considered the N + O2 system, for which a large set of accurate ab initio electronic energy values (calculated at both CASSCF and MR-SDCI level) is available from the literature [22]. CASSCF calculations were performed at various fixed values of the N-O-O attack angle β (β = 135°, 110°, 90°, 70°, 45°). For each value of β, a matrix of geometries corresponding to regularly spaced values of $\rho_\beta = \sqrt{n_{NO}^2 + n_{OO}^2}$


Fig. 4. Isoenergetic contours, plotted as a function of the nNO (y axis) and nOO (x axis) BO variables at β= 135◦ . Energy contours are drawn every 3 Kcal/mol.

Fig. 5. Minimum energy paths of the generalized LEPS calculated at β= 135◦ , 110◦ , 90◦ , 70◦ and 45◦ plotted as a function of θβ

(the radius of the polar version of the BO coordinates) and of the associated θβ =sin−1 (nNO /ρβ ) angle were considered for the ab initio calculations. Calculated CASSCF values were scaled to the MR-CI energies at the minimum of the fixed θβ cut of the ab initio values. Following the above mentioned procedure the asymptotic cuts of the ab initio points were fitted first to Morse diatomic potentials and the best-fit values of the parameters were used to compute the three body component of the potential. The computed three body component was then fitted using both three constant


Sato parameters (as in the usual extended LEPS functional) and the fifteen coefficients of our generalized angular dependent LEPS given in eqs. 8-10. Due also to the particular structure of the NO2 PES we found the extended LEPS based on three constant Sato parameters to be scarcely flexible and to lead to a root mean square deviation of about 3.0 eV. Moreover, the isoenergetic contour plots in general poorly reproduce the ab initio values and have a wrong topology. A much better reproduction of the ab initio data was obtained when using the generalized LEPS (the one which has angle dependent Sato parameters) which gave a root mean square deviation half that of the extended LEPS. This result, though still preliminary, can be considered highly satisfactory due to the fact that a non negligible fraction of the deviation is due to the already mentioned particular structure of the NO2 PES whose two body component is not well reproduced by a Morse functional. The definitely better quality of the fitting carried out using the generalized LEPS functional can also be appreciated by inspecting the isoenergetic contours drawn at different fixed values of β and comparing them with the ab initio values. In particular, they not only always reproduce the topology of the fixed angle ab initio values (see for example the contours calculated at β= 135◦ shown in Fig. 4) but they also reproduce in a quasi quantitative fashion the corresponding minimum energy paths (MEP). MEP plots (see Fig. 5) show, in fact, the large variability of the MEP and the peculiar double barrier structure of the MEP at some values of the approaching angle. Moreover, in agreement with the structure of the ab initio data we found also that when moving from large β values to 110◦ the barrier lowers to rise again in going from β=110◦ to β= 70◦ .

5 Conclusions

In this paper the use of angle dependent LEPS functionals is proposed, and the development of an Internet portal called FITTING, aimed at including their fitting to ab initio data as part of the Grid Enabled Molecular Simulator (GEMS) implemented within EGEE, is illustrated. Using FITTING it is now possible to perform ab initio simulations starting from the generation of the potential energy values (for which the portal SUPSIM is already available) and continuing with their representation by a proper functional form to be used in dynamical calculations. This completes the workflow of GEMS for establishing a service of validation of LEPS potentials. Future work will be concerned with a further generalization of the angle dependent LEPS for its use as a three body component of force fields used in molecular dynamics.


Impact of QoS on Replica Placement in Tree Networks

Anne Benoit, Veronika Rehn, and Yves Robert

Laboratoire LIP, ENS Lyon, France. UMR CNRS-INRIA-UCBL 5668
{Anne.Benoit|Veronika.Rehn|Yves.Robert}@ens-lyon.fr

Abstract. This paper discusses and compares several policies to place replicas in tree networks, subject to server capacity and QoS constraints. The client requests are known beforehand, while the number and location of the servers are to be determined. We study three strategies. The first two strategies assign each client to a unique server while the third allows requests of a client to be processed by multiple servers. The main contribution of this paper is to assess the impact of QoS constraints on the total replication cost. In this paper, we establish the NP-completeness of the problem on homogeneous networks when the requests of a given client can be processed by multiple servers. We provide several efficient polynomial heuristic algorithms for NP-complete instances of the problem. Keywords: Replica placement, QoS constraints, access policies, heterogeneous platforms, complexity, placement heuristics.

1 Introduction

This paper deals with the problem of replica placement in tree networks with Quality of Service (QoS) guarantees. Informally, there are clients issuing several requests per time-unit, to be satisfied by servers with a given QoS. The clients are known (both their position in the tree and their number of requests), while the number and location of the servers are to be determined. A client is a leaf node of the tree, and its requests can be served by one or several internal nodes. Initially, there are no replicas; when a node is equipped with a replica, it can process a number of requests, up to its capacity limit (number of requests served by time-unit). Nodes equipped with a replica, also called servers, can only serve clients located in their subtree (so that the root, if equipped with a replica, can serve any client); this restriction is usually adopted to enforce the hierarchical nature of the target application platforms, where a node has knowledge only of its parent and children in the tree. Every client has some QoS constraints: its requests must be served within a limited time, and thus the servers handling these requests must not be too far from the client. The rule of the game is to assign replicas to nodes so that some optimization function is minimized and QoS constraints are respected. Typically, this optimization function is the total utilization cost of the servers. In this paper we study this optimization problem, called Replica Placement, and we restrict


the QoS in terms of number of hops. This means for instance that the requests of a client who has a QoS range of qos = 5 must be treated by one of the first five internal nodes on the path from the client up to the tree root. We point out that the distribution tree (clients and nodes) is fixed in our approach. This key assumption is quite natural for a broad spectrum of applications, such as electronic, ISP, or VOD service delivery. The root server has the original copy of the database but cannot serve all clients directly, so a distribution tree is deployed to provide a hierarchical and distributed access to replicas of the original data. On the contrary, in other, more decentralized, applications (e.g. allocating Web mirrors in distributed networks), a two-step approach is used: first determine a “good” distribution tree in an arbitrary interconnection graph, and then determine a “good” placement of replicas among the tree nodes. Both steps are interdependent, and the problem is much more complex, due to the combinatorial solution space (the number of candidate distribution trees may well be exponential). Many authors deal with the Replica Placement optimization problem. Most of the papers do not deal with QoS but instead consider average system performance such as total communication cost or total accessing cost. Please refer to [2] for a detailed description of related work with no QoS constraints. Cidon et al. [4] studied an instance of Replica Placement with multiple objects, where all requests of a client are served by the closest replica (Closest policy). In this work, the objective function integrates a communication cost, which can be seen as a substitute for QoS. Thus, they minimize the average communication cost for all the clients rather than ensuring a given QoS for each client. They target fully homogeneous platforms since there are no server capacity constraints in their approach. A similar instance of the problem has been studied by Liu et al. [7], adding a QoS in terms of a range limit, and whose objective is to minimize the number of replicas. In this latter approach, the servers are homogeneous, and their capacity is bounded. Both [4,7] use a dynamic programming algorithm to find the optimal solution. Some of the first authors to introduce actual QoS constraints in the problem were Tang and Xu [9]. In their approach, the QoS corresponds to the latency requirements of each client. Different access policies are considered. First, a replica-aware policy in a general graph with heterogeneous nodes is proven to be NP-complete. When the clients do not know where the replicas are (replica-blind policy), the graph is simplified to a tree (fixed routing scheme) with the Closest policy, and in this case again it is possible to find an optimal dynamic programming algorithm. In [10], Wang et al. deal with the QoS aware replica placement problem on grid systems. In their general graph model, QoS is a parameter of communication cost. Their research includes heterogeneous nodes and communication links. A heuristic algorithm is proposed and compared to the results of Tang and Xu [9]. Another approach, this time for dynamic content distribution systems, is proposed by Chen et al. [3]. They present a replica placement protocol to build a dissemination tree matching QoS and server capacity constraints. Their work


focuses on Web content distribution built on top of peer-to-peer location services: QoS is defined by a latency within which the client has to be served, whereas server capacity is bounded by a fan-out-degree of direct children. Two placement algorithms (a naive and a smart one) are proposed to build the dissemination tree over the physical structure. In [2] we introduced two new access policies besides the Closest policy. In the first one, the restriction that all requests from a given client are processed by the same replica is kept, but client requests are allowed to “traverse” servers in order to be processed by other replicas located higher in the path (closer to the root). This approach is called the Upwards policy. In the second approach, access constraints are further relaxed and the processing of a given client’s requests can be split among several servers located in the tree path from the client to the root. This policy with multiple servers is called Multiple. In this paper we study the impact of QoS constraints on these three policies. On the theoretical side we prove the NP-completeness of the Multiple/Homogeneous instance with QoS constraints, while the same problem was shown to be polynomial without QoS [2]. This result shows the additional combinatorial difficulties which we face when enforcing QoS constraints. On the practical side, we propose several heuristics for all policies. We compare them through simulations conducted for problem instances with different ranges of QoS constraints. We are also able to assess the absolute performance of the heuristics, by comparing them to the optimal solution of the problem provided by a formulation of the Replica Placement problem in terms of a mixed integer linear program. The solution of this program allows us to build an optimal solution [1] for reasonably large problem instances.

2 Framework and Access Policies

We consider a distribution tree T whose nodes are partitioned into a set of clients C and a set of nodes N. The clients are leaf nodes of the tree, while N is the set of internal nodes. A client i ∈ C is making ri requests per time unit to a database, with a QoS qosi: the database must be placed not further than qosi hops on the path from the client to the root. A node j ∈ N may or may not have been provided with a replica of the database. A node j equipped with a replica (i.e. j is a server) can process up to Wj requests per time unit from clients in its subtree. In other words, there is a unique path from a client i to the root of the tree, and each node in this path is eligible to process some or all the requests issued by i when provided with a replica. We denote by R ⊆ N the set of replicas, and Servers(i) ⊆ R is the set of nodes which are processing requests from client i. The number of requests from client i satisfied by server s is ri,s, and the number of hops between i and j ∈ N is denoted by d(i, j). Two constraints must be satisfied:

– Server capacity: ∀s ∈ R, Σ_{i∈C | s∈Servers(i)} ri,s ≤ Ws
– QoS constraint: ∀i ∈ C, ∀s ∈ Servers(i), d(i, s) ≤ qosi


The objective function for the Replica Placement problem is defined as: Min Σ_{s∈R} Ws. When the servers are homogeneous, i.e. ∀s ∈ N, Ws = W, the optimization problem reduces to finding a minimal number of replicas. This problem is called Replica Counting.

We consider three access policies in this paper. The first two are single server strategies, i.e. each client is assigned a single server responsible for processing all its requests. The Closest policy is the most restricted one: the server for client i is enforced to be the first server that can be found on the path from i upwards to the root. Relaxing this constraint leads to the Upwards policy. Clients are still assigned to a single server, but their requests are allowed to traverse one or several servers on the way up to the root, in order to be processed by another server closer to the root. The third policy is a multiple server strategy and hence a further relaxation: a client i may be assigned a set of several servers. Each server s ∈ Servers(i) will handle a fraction ri,s of requests. Of course Σ_{s∈Servers(i)} ri,s = ri. This policy is referred to as the Multiple policy.
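To make the constraints concrete, the following small Python sketch (our own illustration, not part of the paper; the function and argument names such as is_feasible, parent and r are invented) checks whether a candidate assignment of requests to servers respects the server-capacity and QoS constraints above:

from collections import defaultdict

def hops_to_ancestor(node, ancestor, parent):
    """Number of hops from node up to ancestor; None if ancestor is not on the path to the root."""
    d = 0
    while node is not None:
        if node == ancestor:
            return d
        node, d = parent[node], d + 1
    return None

def is_feasible(parent, req, qos, W, r):
    """req[i]: total requests of client i; qos[i]: QoS bound of client i; W[s]: capacity of server s;
    r[(i, s)]: requests of client i served by server s (the ri,s of Section 2)."""
    load = defaultdict(int)
    for (i, s), amount in r.items():
        d = hops_to_ancestor(i, s, parent)
        if d is None or d > qos[i]:           # a server must be an ancestor within qos_i hops
            return False
        load[s] += amount
    if any(load[s] > W[s] for s in load):      # server capacity constraint
        return False
    served = defaultdict(int)                  # every client must have all of its requests processed
    for (i, _), amount in r.items():
        served[i] += amount
    return all(served[i] == req[i] for i in req)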

3 Complexity Results

Table 1 gives an overview of complexity results of the different instances of the Replica Counting problem (homogeneous servers). Liu et al. [7] provided a polynomial algorithm for the Closest policy with QoS constraints. In [2] we proved the NP-completeness of the Upwards policy without QoS. This was a surprising result, to be contrasted with the fact that the Multiple policy is polynomial under the same conditions [2].

Table 1. Complexity results for the different instances of Replica Counting

                  Closest            Upwards            Multiple
Homogeneous       polynomial [4,7]   NP-complete [2]    polynomial [2]
Homogeneous/QoS   polynomial [7]     NP-complete [2]    NP-complete (this paper)

An important contribution of this paper is the NP-completeness of the Multiple policy with QoS constraints. As stated above, the same problem was polynomial without QoS, which gives a clear insight on the additional complexity introduced by QoS constraints. The proof uses a reduction from 2-PARTITION-EQUAL [5]. Due to a lack of space we refer to the extended version of this paper [1] for the complete proof.

Theorem 1. The instance of the Replica Counting problem with QoS constraints and the Multiple strategy is NP-complete.

Finally, we point out that all three instances of the Replica Placement problem (heterogeneous servers with the Closest, Upwards and Multiple policies) are already NP-complete without QoS constraints [2].

4 Heuristics for the Replica Placement Problem

In this section several heuristics for the Closest, Upwards and Multiple policies are presented. As already pointed out, the quality of service is the number of hops that requests of a client are allowed to traverse until they have to reach their server. The code and some more heuristics can be found on the web [8]. All heuristics described below have polynomial, and even worst-case quadratic, complexity O(s²), where s = |C| + |N| is the problem size. In the following, we denote by inreqQoSj the amount of requests that reach an inner node j within their QoS constraints, and by inreqj the total amount of requests that reach j (including requests whose QoS constraints are violated).

Closest Big Subtree First - CBS. Here we traverse the tree in top-down manner. We place a replica on an inner node j if inreqQoSj ≤ Wj. When the condition holds, we do not process any other subtree of j. If this condition does not hold, we process the subtrees of j in non-increasing order of inreqj. Once no further replica can be added, we repeat the procedure. We stop when no new replica is added during a pass.

Upwards Small QoS Started Servers First - USQoSS. Clients are sorted by non-decreasing order of qosi (and non-increasing order of ri in case of tie). For each client i in the list we search for an appropriate server: we take the next server on the way up to the root (i.e. an inner node that is already equipped with a replica) which has enough remaining capacity to treat all the client's requests. Of course the QoS-constraints of the client have to be respected. If there is no server, we take the first inner node j that satisfies Wj ≥ ri within the QoS-range and we place a replica in j. If we still find no appropriate node, this heuristic has no feasible solution.

Upwards Minimal Distance - UMD. This heuristic requires two steps. In the first step, so-called indispensable servers are chosen, i.e. inner nodes which have a client that must be treated by this very node. At the beginning, all servers that have a child client with qos = 1 will be chosen. This step guarantees that in each loop of the algorithm, we do not forget any client. The criterion for indispensable servers is the following: for each client check the number of nodes eligible as servers; if there is only one, this node is indispensable and chosen. The second step of UMD chooses the inner node with minimal (Wj − inreqQoSj)-value as server (if inreqQoSj > 0). Note that this value can be negative. Then clients are associated to this server in order of distance, i.e. clients that are close to the server are chosen first, until the server capacity Wj is reached or no further client can be found.

Multiple Small QoS Close Servers First - MSQoSC. The main idea of this heuristic is the same as for USQoSS, but with two differences. Searching for an appropriate server, we take the first inner node on the way up to the root which has some remaining capacity. Note that this makes the difference between close and started servers. If this capacity Wi is not sufficient (client c has more requests, Wi < rc), we choose other inner nodes going upwards to the root until all requests of the client can be processed (this is possible owing to


the multiple-server relaxation). If we cannot find enough inner nodes for a client, this heuristic will not return a feasible solution.

Multiple Small QoS Minimal Requests - MSQoSM. In this heuristic clients are treated in non-decreasing order of qosi, and the appropriate servers j are chosen by minimal (Wj − inreqQoSj)-value until all requests of clients can be processed.

Multiple Minimal Requests - MMR. This heuristic is the counterpart of UMD for the Multiple policy and requires two steps. Servers are added in the "indispensable" step, either when they are the only possible server for a client, or when the total capacity of all possible inner nodes for a client i is exactly ri. The server chosen in the second step is also the inner node with minimal (Wj − inreqQoSj)-value, but this time clients are associated in non-decreasing order of min(qosi, d(i, r)), where d(i, r) is the number of hops between i and the root of the tree. Note that the last client that is associated to a server might not be processed entirely by this server.

Mixed Best - MB. This heuristic unifies all previous ones, including those presented in [1]. For each tree, we select the best cost returned by the other heuristics. Since each solution for Closest is also a solution for Upwards, which in turn is a valid solution for Multiple, this heuristic provides a solution for the Multiple policy.
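As an illustration of the flavour of these heuristics, here is a rough Python sketch of the CBS heuristic described above (our own reading of the description, not the authors' code from [8]; the data structures clients_below, dist, etc. are assumptions):

def cbs(root, children, clients_below, W, r, dist, qos):
    """Closest Big Subtree First (CBS), roughly as described in Section 4.
    root: root node; children[j]: inner-node children of j; clients_below[j]: clients in j's subtree;
    W[j]: capacity of j; r[i]: requests of client i; dist[(i, j)]: hops from i to j; qos[i]: QoS bound.
    Returns the set of inner nodes chosen as servers."""
    replicas, served = set(), set()

    def inreq(j, qos_only):
        return sum(r[i] for i in clients_below[j] if i not in served
                   and (not qos_only or dist[(i, j)] <= qos[i]))

    def top_down(j):
        added = False
        demand = inreq(j, qos_only=True)
        if j not in replicas and 0 < demand <= W[j]:
            replicas.add(j)                        # whole remaining subtree load fits: place a replica, stop here
            served.update(i for i in clients_below[j]
                          if i not in served and dist[(i, j)] <= qos[i])
            return True
        for c in sorted(children[j], key=lambda c: inreq(c, qos_only=False), reverse=True):
            added |= top_down(c)                   # biggest subtrees (by inreq) first
        return added

    while top_down(root):                          # repeat until a pass adds no new replica
        pass
    return replicas

The other heuristics differ mainly in how clients are ordered (by qosi, by distance to the server) and in whether a client's requests may be split across several servers.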

5 Experimental Plan

In this section we evaluate the performance of our heuristics on tree platforms with varying parameters. Through these experiments we want to assess the different access policies, and the impact of QoS constraints on the performance of the heuristics. We obtain an optimal solution for each tree platform with the help of a mixed integer linear program, see [1] for further details. We can compute the latter optimal solution for problem sizes up to 400 nodes and clients, using GLPK [6].

An important parameter in our tree networks is the load, i.e. the total number of requests compared to the total processing power: λ = (Σ_{i∈C} ri) / (Σ_{j∈N} Wj), where C is the set of clients in the tree and N the set of inner nodes. We tested our heuristics for λ = 0.1, 0.2, ..., 0.9, each on 30 randomly generated trees of two heights: in a first series, trees have a height between 4 and 7 (small trees). In the second series, tree heights vary between 16 and 21 (big trees). All trees have s nodes, where 15 ≤ s ≤ 400. To assess the impact of QoS on the performance, we study the behavior (i) when QoS constraints are very tight (qos ∈ {1, 2}); (ii) when QoS constraints are more relaxed (the average value is set to half of the tree height); and (iii) without any QoS constraint (qos = height + 1).

We have computed the number of solutions for each λ and each heuristic. The number of solutions obtained by the linear program indicates which problems are solvable. Of course we cannot expect a result with our heuristics for intractable problems. To assess the performance of our heuristics, we have studied the relative performance of each heuristic compared to the optimal solution. For each λ,


Fig. 1. Relative performance of small trees with (a) tight, (b) medium, (c) no QoS constraints. (d) Big trees with medium QoS constraints.

the cost is computed on those trees for which the linear program has a solution. Let Tλ be the subset of trees with an LP solution. Then, the relative performance for the heuristic h is obtained by (1/|Tλ|) Σ_{t∈Tλ} costLP(t)/costh(t), where costLP(t) is the optimal solution cost returned by the linear program on tree t, and costh(t) is the cost involved by the solution proposed by heuristic h. In order to be fair versus heuristics that have a higher success rate, we set costh(t) = +∞ if the heuristic did not find any solution.

Figure 1 gives an overview of our performance tests (see [1] for the complete set of results). The comparison between Fig. 1a and 1c shows the impact of QoS on the performance. The impact of the tree sizes can be seen by comparing Fig. 1b and 1d. Globally, all the results show that QoS constraints do not modify the relative performance of the three policies: with or without QoS, Multiple is better than Upwards, which in turn is better than Closest, and their difference in performance is not sensitive to QoS tightness or to tree sizes. This is an enjoyable result that could not be predicted a priori. The MB heuristic returns very good results, being relatively close to the optimal in most cases. The best heuristic to use depends on the tightness of QoS constraints. Thus, for Multiple, MSQoSM is the best choice for tight QoS constraints and small λ (Fig. 1a). When QoS is less constrained, MMR is the best for λ up to 0.4. For big λ, MSQoSC is preferable, since it never performs poorly in this case. Concerning the Upwards policy, USQoSS behaves the best for tight QoS; in the other cases UMD achieves better results. We kept only our best Closest heuristic on these curves, which outperforms the others [1].
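A minimal sketch of how this relative performance can be computed, with failed runs penalized by +∞ as described (our own illustration; cost_lp and cost_h are assumed dictionaries mapping trees to costs):

import math

def relative_performance(cost_lp, cost_h):
    """Average of cost_LP(t) / cost_h(t) over the trees T_lambda solved by the linear program.
    cost_lp: tree -> optimal cost; cost_h: tree -> heuristic cost, missing or math.inf when it failed."""
    solved = list(cost_lp)                                  # T_lambda
    return sum(cost_lp[t] / cost_h.get(t, math.inf) for t in solved) / len(solved)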


6 Conclusion

In this paper we have dealt with the Replica Placement optimization problem with QoS constraints. We have proven the NP-completeness of Multiple with QoS constraints on homogeneous platforms, and we have proposed a set of efficient heuristics for the Closest, Upwards and Multiple access policies. To evaluate the absolute performance of our algorithms, we have compared the experimental results to the optimal solution of an integer linear program, and these results turned out quite satisfactory. In our experiments we have assessed the impact of QoS constraints on the different policies, and we have discussed which heuristic performed best depending upon problem instances, platform parameters and QoS tightness. We have also shown the impact of platform size on the performances. Although we studied the problem with a restricted QoS model, we expect experimental results to be similar for more general QoS constraints. As for future work, bandwidth and communication costs could be included in the experimental plan. Also the structure of the tree networks has to be studied more precisely. In this paper we have investigated different tree heights, but it would be interesting to study the impact of the average degree of the nodes on the performance. In a longer term, the extension of the Replica Placement optimization problem to various object types should be considered, which would call for the design and evaluation of new efficient heuristics.

References

1. Benoit, A., Rehn, V., Robert, Y.: Impact of QoS on Replica Placement in Tree Networks. Research Report 2006-48, LIP, ENS Lyon, France, available at graal.ens-lyon.fr/~yrobert/ (2006)
2. Benoit, A., Rehn, V., Robert, Y.: Strategies for Replica Placement in Tree Networks. Research Report 2006-30, LIP, ENS Lyon, France, available at graal.ens-lyon.fr/~yrobert/ (2006)
3. Chen, Y., Katz, R. H., Kubiatowicz, J. D.: Dynamic Replica Placement for Scalable Content Delivery. In: Peer-to-Peer Systems: First International Workshop, IPTPS 2002, Cambridge, MA, USA (2002) 306-318
4. Cidon, I., Kutten, S., Soffer, R.: Optimal allocation of electronic content. Computer Networks 40 (2002) 205-218
5. Garey, M. R., Johnson, D. S.: Computers and Intractability, a Guide to the Theory of NP-Completeness. W. H. Freeman and Company (1979)
6. GLPK: GNU Linear Programming Kit. http://www.gnu.org/software/glpk/
7. Liu, P., Lin, Y. F., Wu, J. J.: Optimal placement of replicas in data grid environments with locality assurance. In: International Conference on Parallel and Distributed Systems (ICPADS). IEEE Computer Society Press (2006)
8. Source Code for the Heuristics. http://graal.ens-lyon.fr/~vrehn/code/replicaQoS/
9. Tang, X., Xu, J.: QoS-Aware Replica Placement for Content Distribution. IEEE Trans. Parallel Distributed Systems 16(10) (2005) 921-932
10. Wang, H., Liu, P., Wu, J. J.: A QoS-aware Heuristic Algorithm for Replica Placement. In: Proceedings of the 7th International Conference on Grid Computing, GRID2006. IEEE Computer Society (2006) 96-103

Generating Traffic Time Series Based on Generalized Cauchy Process

Ming Li¹, S.C. Lim², and Huamin Feng³

¹ School of Information Science & Technology, East China Normal University, Shanghai 200062, P.R. China
[email protected], [email protected]
² Faculty of Engineering, Multimedia University, 63100 Cyberjaya, Selangor, Malaysia
[email protected]
³ Key Laboratory of Security and Secrecy of Information, Beijing Electronic Science and Technology Institute, Beijing 100070, P.R. China
[email protected]

Abstract. Generating traffic time series (traffic for short) is important in networking, e.g., simulating the Internet. In this aspect, it is desired to generate a time series according to a given correlation structure that may well reflect the statistics of real traffic. Recent research on traffic modeling exhibits that traffic is well modeled by a type of Gaussian process called the generalized Cauchy (GC) process, indexed by two parameters that separately characterize the self-similarity (SS), which is a local property described by the fractal dimension D, and the long-range dependence (LRD), which is a global feature that can be measured by the Hurst parameter H, instead of using the linear relationship D = 2 − H as used in traditional traffic models with a single parameter such as fractional Gaussian noise (FGN). This paper presents a computational method to generate series based on the correlation form of the GC process indexed by the two parameters. Hence, the present model can be used to simulate realizations that flexibly capture the fractal phenomena of real traffic for both short-term lags and long-term lags. Keywords: Random data generation, network traffic, self-similarity, long-range dependence, generalized Cauchy process.

1 Introduction

Network traffic is a type of fractal series with both (local) self-similarity (SS) and long-range dependence (LRD). Hence, it is a common case to exhibit fractal phenomena of time series, see e.g. [1], [2] and references therein. Its simulation is greatly desired in Internet communications [3]. Conventional methods to generate random series are either based on a given probability density function, see e.g. [4], or a given power spectrum, see e.g. [5]. For traffic simulation, however, it is expected to accurately synthesize data series according to a predetermined correlation structure [3]. This is because not only is the autocorrelation function (ACF) of traffic with LRD an ordinary function while the power spectrum of a series with LRD is a generalized function, but also the ACF of arrival traffic greatly impacts the performances of queuing


systems [6]. In addition, performance analysis desires to accurately know how one packet arriving at time t statistically correlates to the other arriving at t + τ apart in the future, as remarked in [3, First sentence, Subparagraph 4, Paragraph 2, Section 6.1]. Therefore, this paper focuses on a correlation-based computational method. As known, the statistics of synthesized traffic rely on the traffic correlation model used in simulation. FGN with a single parameter is a widely used traditional traffic model, see e.g. [7], [8], [9]. Hence, traditional methods to synthesize traffic are based on FGN with a single parameter, see e.g. [10], [11], [12], [13], [14]. The realizations based on those methods, therefore, only have the statistical properties of FGN with a single parameter, which characterizes SS and LRD by the linear relationship D = 2 − H. Recall that SS and LRD are two different concepts, see e.g. [15], [16], [17], [18]. In fact, let X(t) be a traffic series. Then, X(t) being of SS with SS index κ means

X(at) ≜ a^κ X(t), a > 0,   (1)

where ≜ denotes equality in finite joint distributions. On the other hand, X(t) is of LRD if its ACF, r(τ), is non-summable. That is, r(τ) follows a power law given by

r(τ) ~ c τ^(−β) (τ → ∞), c > 0, 0 < β < 1.   (2)

The GC process is a stationary Gaussian process X(t) whose ACF is given by

r(τ) = (1 + |τ|^α)^(−β/α), 0 < α ≤ 2, β > 0,

see [16], [17], [18]. Note that r(τ) is positive-definite for the above ranges of α and β, and it is completely monotone for 0 < α ≤ 1, β > 0. When α = 2 and β = 1, one gets the usual Cauchy process. Clearly the GC process satisfies the LRD property for β < 1 since

∫₀^∞ r(τ) dτ = ∫₀^∞ (1 + τ^α)^(−β/α) dτ = ∞ if β < 1.   (5)

We note that the GC process is locally self-similar, as can be seen from the following. As a matter of fact, it is a Gaussian stationary process and it is locally self-similar of order α since its ACF satisfies, for τ → 0,

r(τ) = 1 − (β/α)|τ|^α {1 + O(|τ|^γ)}, γ > 0.   (6)

The above expression is equivalent to the following more commonly used definition of local self-similarity of a Gaussian process:

X(t) − X(aτ) ≜ a^κ [X(t) − X(τ)], τ → 0.   (7)

The equivalence can be shown by noting that, for τ1 and τ2 → 0, (6) gives for κ = α/2

E[(X(t + bτ1) − X(t))(X(t + bτ2) − X(t))] = (β/α)[|bτ1|^α + |bτ2|^α − |b(τ1 − τ2)|^α]
= (β b^α/α)(|τ1|^α + |τ2|^α − |τ1 − τ2|^α) = E[b^(α/2)(X(t + τ1) − X(t)) · b^(α/2)(X(t + τ2) − X(t))].

In order to determine the fractal dimension of the graph of X(t), we consider the local property of traffic. The fractal dimension D of a locally self-similar traffic of order α is given by (see [17], [18], [23])

D = 2 − α/2.   (8)

From (2), therefore, one has

H = 1 − β/2.   (9)
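As an illustration, (8) and (9) allow the GC correlation to be parametrized directly by D and H; a small helper of our own (not from the paper) would be:

import numpy as np

def gc_acf(tau, D, H):
    """GC autocorrelation r(tau) = (1 + |tau|**alpha)**(-beta/alpha)
    with alpha = 4 - 2*D (eq. 8) and beta = 2 - 2*H (eq. 9)."""
    alpha, beta = 4.0 - 2.0 * D, 2.0 - 2.0 * H
    return (1.0 + np.abs(tau) ** alpha) ** (-beta / alpha)

# e.g. same H, different roughness: gc_acf(np.arange(100), D=1.95, H=0.75) vs. D=1.20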

From (8) and (9), we see that D and H may vary independently. X(t) is of LRD for β ∈ (0, 1) and SRD for β > 1. Thus, we have the fractal index α, which determines the fractal dimension, and the index β, which characterizes the LRD. At the end of this section, we discuss estimates of D and H of a GC process for the purpose of completing the description of GC processes, though the focus of this paper does not relate to their estimators. There are some techniques that are popularly used to estimate D, such as box-counting, spectral, and variogram-based methods, see e.g. [15], [24], [25]. Nevertheless, some of the more popular methods, e.g., box-counting, suffer from biases [26]. The method worth noting is called the variogram estimator, which was explained in [27]. Denote by γ(d) the observed mean of the square of the difference between two values of the series at points that are distance d apart. Then, this estimator has the scaling law given by

log γ(d) = constant + α log d + error for d → 0.   (10)

The above scaling law is suitable for stationary processes satisfying 1 − r(τ) ~ |τ|^α for τ → 0. The variogram estimator of D is expressed by D̂ = 2 − α̂/2, where α̂ is the slope in a log-log plot of γ(d) versus d. The reported estimators of H are rich, such as R/S analysis, the maximum likelihood method, and so forth [2], [15], [28]. The method worth mentioning is called the detrended fluctuation analysis introduced in [29], [30]. With this technique, a series is partitioned into blocks of size m. Within each block, least-square fitting is used. Denote by v(m) the average of the sample variances. Then, the detrended fluctuation analysis is based on the following scaling law:

log v(m) = constant + (2 − β) log m + error for m → ∞.   (11)

The above is applicable for stationary processes satisfying (2), see [28] for details. The estimate Ĥ of H is half the slope in a log-log plot of v(m) versus m.
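Both (10) and (11) amount to fitting a slope in a log-log plot. The sketch below is our own simplified illustration of that idea (it is not the exact variogram or detrended fluctuation procedure of [27], [29]; lag ranges, block sizes and detrending details are assumptions):

import numpy as np

def estimate_D_variogram(x, max_lag=20):
    """Slope of log gamma(d) vs. log d for small d gives alpha-hat; D-hat = 2 - alpha-hat/2 (eq. 10)."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([np.mean((x[d:] - x[:-d]) ** 2) for d in lags])
    alpha_hat = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
    return 2.0 - alpha_hat / 2.0

def estimate_H_dfa_like(x, block_sizes=(16, 32, 64, 128, 256)):
    """Slope of log v(m) vs. log m gives (2 - beta); H-hat is half that slope (eq. 11)."""
    v = []
    for m in block_sizes:
        blocks = x[: (len(x) // m) * m].reshape(-1, m)
        t = np.arange(m)
        resid_vars = []
        for b in blocks:
            trend = np.polyval(np.polyfit(t, b, 1), t)   # least-squares line within the block
            resid_vars.append(np.var(b - trend))
        v.append(np.mean(resid_vars))
    slope = np.polyfit(np.log(block_sizes), np.log(v), 1)[0]
    return slope / 2.0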

3 Computational Model

Let w(t), W(ω) and S_w(ω) be a white noise function, its spectrum and its power spectrum, respectively. Then, W(ω) = F[w(t)] and S_w(ω) = W W* = Constant, where W* is the complex conjugate of W and F is the operator of Fourier transform. Suppose w(t) is the unit white noise. Then, S_w(ω) = 1. Let h(t) and H(ω) respectively be the impulse function and system function of a linear filter, which we call the simulator in this paper. Denote by y(t) the response under the excitation of w(t). Then, y = w ∗ h, where ∗ means the operation of convolution. Denote by S_y(ω) the power spectrum of y. Then, under the excitation of w, one has S_y(ω) = |H(ω)|². Let y be the GC process X. Then, |H(ω)|² = S_X(ω). Denote by ψ(ω) the phase function of H(ω). Then, H(ω) = [S_X(ω)]^0.5 e^(−jψ(ω)), where

S_X(ω) = F[(1 + |t|^α)^(−β/α)].

Without losing generality, let ψ(ω) = 2nπ (n = 0, 1, ...). Therefore, the impulse function of the simulator to generate traffic following GC processes under the excitation of white noise is

h(t) = F^(−1){ [F((1 + |t|^α)^(−β/α))]^0.5 },   (12)

where F^(−1) is the inverse of F. Consequently, the output of the simulator, i.e., the synthesized traffic obeying GC processes, is given by

X(t) = w(t) ∗ F^(−1){ [F((1 + |t|^α)^(−β/α))]^0.5 }.   (13)

Expressing α and β by D and H, we have

X(t) = w(t) ∗ F^(−1){ [F((1 + |t|^(4−2D))^(−(1−H)/(2−D)))]^0.5 }.   (14)
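For completeness, the exponent in (14) is simply the result of substituting (8) and (9) into the exponent −β/α of the GC correlation:

α = 4 − 2D, β = 2 − 2H  ⟹  β/α = (2 − 2H)/(4 − 2D) = (1 − H)/(2 − D).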

In the above, w(t) = F^(−1)[W(ω)], where W(ω) = e^(jϑ(ω)) and ϑ is a real random function with arbitrary distribution. In practice, traffic is band-limited. Thus, let

W(ω) = e^(jφ(ω)) for |ω| ≤ ωc, and W(ω) = 0 otherwise,   (15)

where φ (ω ) is a real random function with arbitrary distribution and cutoff frequency ωc is such that it completely covers the band of the traffic of interest. In the discrete case, w(n) = IFFT[W (ω )], where IFFT represents the inverse of FFT (fast Fourier transform). Fig. 1 indicates the computation procedure.
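A minimal Python sketch of this procedure (our own illustration, assuming a discrete periodic lag grid and FFT-based circular convolution; the function name synthesize_gc_traffic and the normalization details are not from the paper):

import numpy as np

def synthesize_gc_traffic(n, D, H, seed=None):
    """Sketch of the correlation-based generator of Section 3.
    n: number of samples; D in (1, 2): fractal dimension; H in (0.5, 1): Hurst parameter.
    Follows eqs. (12)-(15): X = w * h with h = IFFT{ sqrt(FFT[GC ACF]) }."""
    rng = np.random.default_rng(seed)
    alpha, beta = 4.0 - 2.0 * D, 2.0 - 2.0 * H            # eqs. (8) and (9)
    lags = np.abs(np.fft.fftfreq(n, d=1.0 / n))           # wrapped lag grid 0, 1, ..., n/2, ...
    r = (1.0 + lags ** alpha) ** (-beta / alpha)          # GC autocorrelation
    S = np.abs(np.fft.fft(r))                             # power spectrum of the GC process
    h = np.real(np.fft.ifft(np.sqrt(S)))                  # impulse function, eq. (12)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)           # random phase, eq. (15)
    w = np.real(np.fft.ifft(np.exp(1j * phi)))            # band-limited white noise
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(h)))   # convolution, eqs. (13)/(14)

x = synthesize_gc_traffic(4096, D=1.2, H=0.75, seed=0)    # e.g. the smooth, strongly dependent case of Fig. 2(b)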


Fig. 1. Computation flow chart: input the values of D and H of the Cauchy correlation and a random phase sequence φ; compute the white noise sequence w(n) and the impulse function of the simulator h(n) according to (12); convolute w(n) with h(n) to obtain the synthesized generalized Cauchy sequence X.

4 Case Study

Simulated realizations are shown in Figs. 2-3. Due to the advantage of the separated characterizations of D and H by using the Cauchy correlation model, we can observe the distinct effects of D and H. In Fig. 2, the Hurst parameter H is constant at 0.75 but the fractal dimension D decreases (D = 1.95 and 1.20). Comparing Fig. 2 (a) and (b), we can see that the realization in Fig. 2 (a) is rough while that in Fig. 2 (b) is smooth. In Fig. 3, the fractal dimension D is constant at 1 while H decreases (H = 0.95 and 0.55). From Fig. 3, we can evidently see the stronger persistence (i.e., stronger LRD) in (a) than in (b).

Fig. 2. Realizations. (a) D = 1.95, H = 0.75. (b) D = 1.20, H = 0.75.

Fig. 3. Realizations. (a) H = 0.95, D = 1. (b) H = 0.55, D = 1.

5 Conclusions

We have given a computational model to generate traffic with separate parametrization of the self-similarity property and long-range dependence based on the correlation model of the generalized Cauchy processes. Since this correlation model can separately characterize the fractal dimension and the Hurst parameter of a process, the present method can be used to simulate realizations that have the same long-range dependence with different fractal dimensions (i.e., different burstinesses from the view point of networking). On the other hand, we can synthesize realizations that have the same fractal dimension but with different long-range dependencies. Hence it provides a flexible way to simulate realizations of traffic. These are key advantages of the computational model presented.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under the project grant numbers 60573125 and 60672114, and by the Key Laboratory of Security and Secrecy of Information, Beijing Electronic Science and Technology Institute, under the open fund project number KYKF 200606. SC Lim would like to thank the Malaysia Ministry of Science, Technology and Innovation for the IRPA Grant 09-99-01-0095 EA093, and the Academy of Sciences of Malaysia for the Scientific Advancement Fund Allocation (SAGA) P 96c.


References

1. Mandelbrot, B. B.: Gaussian Self-Affinity and Fractals. Springer (2001)
2. Beran, J.: Statistics for Long-Memory Processes. Chapman & Hall (1994)
3. Paxson, V., Floyd, S.: Why We Don't Know How to Simulate the Internet. Proc., Winter Simulation Conf. (1997) 1037-1044
4. Press, W. H., Teukolsky, S. A., Vetterling, W. T., Flannery, B. P.: Numerical Recipes in C: the Art of Scientific Computing. 2nd Edition, Cambridge University Press (1992)
5. Li, M.: Applied Mathematical Modelling 29 (2005) 55-63
6. Livny, M., Melamed, B., Tsiolis, A. K.: The Impact of Autocorrelation on Queuing Systems. Management Science 39 (1993) 322-339
7. Tsybakov, B., Georganas, N. D.: IEEE T. Information Theory 44 (1998) 1713-1725
8. Li, M., Zhao, W., Jia, W., Long, D. Y., Chi, C.-H.: Modeling Autocorrelation Functions of Self-Similar Teletraffic in Communication Networks based on Optimal Approximation in Hilbert Space. Applied Mathematical Modelling 27 (2003) 155-168
9. Paxson, V., Floyd, S.: IEEE/ACM T. Networking 3 (1995) 226-244
10. Paxson, V.: Fast Approximate Synthesis of Fractional Gaussian Noise for Generating Self-Similar Network Traffic. Computer Communication Review 27 (1997) 5-18
11. Jeong, H.-D. J., Lee, J.-S. R., McNickle, D., Pawlikowski, P.: Simulation Modelling Practice and Theory 13 (2005) 233-256
12. Ledesma, S., Liu, D.: Computer Communication Review 30 (2000) 4-17
13. Garrett, M. W., Willinger, W.: Analysis, Modeling and Generation of Self-Similar VBR Traffic. Proc., ACM SigComm'94, London (1994) 269-280
14. Li, M., Chi, C.-H.: A Correlation-Based Computational Method for Simulating Long-Range Dependent Data. Journal of the Franklin Institute 340 (2003) 503-514
15. Mandelbrot, B. B.: The Fractal Geometry of Nature. W. H. Freeman (1982)
16. Gneiting, T., Schlather, M.: Stochastic Models That Separate Fractal Dimension and Hurst Effect. SIAM Review 46 (2004) 269-282
17. Li, M., Lim, S. C.: Modeling Network Traffic Using Cauchy Correlation Model with Long-Range Dependence. Modern Physics Letters B 19 (2005) 829-840
18. Lim, S. C., Li, M.: Generalized Cauchy Process and Its Application to Relaxation Phenomena. Journal of Physics A: Mathematical and General 39 (12) (2006) 2935-2951
19. Mandelbrot, B. B., van Ness, J. W.: Fractional Brownian Motions, Fractional Noises and Applications. SIAM Review 10 (1968) 422-437
20. Kaplan, L. M., Kuo, C.-C. J.: IEEE T. Signal Processing 42 (1994) 3526-3530
21. Chiles, J-P., Delfiner, P.: Geostatistics, Modeling Spatial Uncertainty. Wiley (1999)
22. Li, M.: Modeling Autocorrelation Functions of Long-Range Dependent Teletraffic Series based on Optimal Approximation in Hilbert Space - a Further Study. Applied Mathematical Modelling 31 (3) (2007) 625-631
23. Kent, J. T., Wood, A. T. A.: J. R. Statist. Soc. B 59 (1997) 579-599
24. Dubuc, B., Quiniou, J. F., Roques-Carmes, C., Tricot, C., Zucker, S. W.: Phys. Rev. A 39 (1989) 1500-1512
25. Hall, P., Wood, A.: Biometrika 80 (1993) 246-252
26. Taylor, C. C., Taylor, S. J.: J. Roy. Statist. Soc. Ser. B 53 (1991) 353-364
27. Constantine, A. G., Hall, P.: J. Roy. Statist. Soc. Ser. B 56 (1994) 97-113
28. Taqqu, M. S., Teverovsky, V., Willinger, W.: Fractals 3 (1995) 785-798
29. Peng, C.-K., Buldyrev, S. V., Havlin, S., Simons, M., Stanley, H. E., Goldberger, A. L.: Mosaic Organization of DNA Nucleotides. Phys. Rev. E 49 (1994) 1685-1689
30. Kantelhardt, J. W., et al.: Phys. A 295 (2001) 441-454
31. Li, M., Lim, S. C.: A Rigorous Derivation of Power Spectrum of Fractional Gaussian Noise. Fluctuation and Noise Letters 6 (4) (2006) C33-C36

Reliable and Scalable State Management Using Migration of State Information in Web Services*

Jongbae Moon, Hyungil Park, and Myungho Kim

#313, School of Computing, Soongsil University, Sangdo-5 Dong, Dongjak-Gu, Seoul, 156-743, Korea
{jbmoon, hgpark}@ss.ssu.ac.kr, [email protected]

* This work was supported by the Soongsil University Research Fund.

Abstract. The WS-Resource framework (WSRF) was proposed as a reengineering and evolution of OGSI to be compatible with the current Web services conventions and specifications. Although WSRF separates state management from the stateless web service and provides a mechanism for state management, it still has some limitations. The tight-coupling between web services and their resource factories restricts the scalability. When the repository of stateful resources fails, stateful web services can not work. In this paper, we propose a new state management framework which is called State Management Web Service Framework (SMWSF) and implemented on Microsoft .NET framework. SMWSF provides reliability, flexibility, scalability, and security. SMWSF provides migration of state information and the service requestor can control the location of the repository of stateful resource. We also implemented a prototype system and conducted comparison experiments to evaluate performance. Keywords: state management, web service, WSRF, migration.

1 Introduction

Web services are “software systems designed to support interoperable machine-to-machine interaction over a network” [1]. Though there has been some success of Web services in the industry, Web services have been regarded as stateless and non-transient [2]. Recently, however, most web services are stateful; state information is kept and used during the execution of applications. To manage state information within the Web services framework, the Open Grid Services Infrastructure (OGSI) [3] and the Web Service Resource Framework (WSRF) [4] were proposed. Both OGSI and WSRF are concerned with how to manipulate stateful resources. OGSI extends the power of the Web services framework significantly by integrating support for transient, stateful service instances with existing Web services technologies. The Globus Toolkit 3 (GT3) [5] is an implementation of the OGSI specification and has become a de facto standard for Grid middleware. GT3 uses a Grid service factory to create multiple instances of the Grid service, and the Grid service instances are stateful. However, because GT3 uses the same container for grid services, the


service container, which is a Grid service factory, has to be restarted whenever a new service joins. This will affect all existing services in the same container. WSRF was introduced as a refactoring and evolution of OGSI, and provides a generic, open framework for modeling and accessing stateful resources using Web services. WSRF uses different container to manage stateful resources and Web services; WSRF separates state management from the stateless web services. Therefore, there is no loss of state information and other service instances should continue to work although the execution of a service instance fails. WSRF, however, still has some limitations [2]. Each Web service accompanies a WS-Resource factory. The tight coupling between the web service and the resource factory leads to scalability problem. Moreover, WSRF does not provide the flexibility of choosing the location of the state repository. This may introduce security problems although the service itself is trusted by the requestor; the requestor and provider may have different security policies. Moreover, when the state repository fails, the stateful web service does not work. In this paper, we propose a new state management framework, which is implemented on Microsoft .NET Web services, and implement a prototype system. The prototype system makes Web services and their state management loosely-coupled and Web services can use another state management service, which is in another service provider, to provide scalability. The state information can migrate to another repository to enhance reliability, security, and scalability while Web services are running. The migration of the state information also provides the flexibility of choosing the location of the state repository. To provide security for the state information, the state management stores the state information with an encryption key which is sent by the requestor. Moreover, whenever the state information migrates, it is transferred through a security channel. The rest of this paper is organized as follows. Section 2 summarizes the existing researches regarding the state management in Web services. Section 3 proposes a system model. Section 4 describes how to implement a prototype system. Section 5 evaluates the performance of the proposed system by conducting comparison experiments, and Section 6 concludes the paper.

2 Related Works

Web services have a problem that it is difficult to maintain state because web services are built on top of the stateless HTTP protocol. While Web service implementations are typically stateless, their interfaces frequently provide a user with the ability to access and manipulate state. In [6], three different models to keep and manage state information are proposed. However, maintaining state information in this way places restrictions on scalability. Moreover, to provide security for the state, extra work is required. OGSI enables access to stateful resources with the maintenance and retrieval of state information. OGSI introduces the resource factory to create multiple instances of the Grid service. Because OGSI uses the same container for Grid services, the service container has to be restarted whenever a new service joins. WSRF provides standardized patterns and interfaces to manage state information. WSRF introduces the WS-Resource factory to create the instance of stateful resources. When a web service fails, WSRF can restore the state information by


separating state management from a web service. WSRF, however, still has limitations in managing state. The tight coupling between Web services and their factories leads to a scalability problem. In [2], a generic state management service, which separates state management from Web services, is introduced to overcome WSRF's limitations. Besides reducing the work for service developers, scalability is enhanced by the flexible deployment of the state management service. However, once the state information is stored in a stateful resource, the requestor cannot change the location of the repository. Therefore, although the Web service and the state management service are trusted by the requestor, the repository itself may not guarantee security. Moreover, failure of the repository reduces reliability and scalability.

3 Proposed State Management Model

In this section, we propose a new state management framework which overcomes the WSRF limitations mentioned above. The proposed framework is called the State Management Web Service Framework (SMWSF). SMWSF provides scalability, flexibility, security and reliability. Fig. 1 shows the state management system model based on SMWSF. In this system model, state management, which is a Web service for creating and managing the instance of a stateful resource, is separated from web services. The service requestor can make web services use another state management service that exists in one of the other service providers. The state management service implements common interfaces to store the state information in several types of stateful resource. Therefore, service authors can easily develop web services regardless of the implementation of the state management interfaces. Moreover, the state management service provides an interface through which a stateful resource can migrate to another repository, possibly in another type. The state information is encrypted or decrypted before the state management service stores or restores it. Moreover, communication between the state management service and the stateful resource is established through a security channel.

A stateful resource is referred to as a WS-Resource in WSRF, and each WS-Resource is described by an XML document. In SMWSF, a stateful resource is implemented in several different types. The service requestor may want to change the location of the stateful resource because of security problems or failure of the repository. The state information can migrate to another repository chosen by the requestor, and the type of the stateful resource can be changed. In addition, migrating the stateful resource enhances reliability when the repository fails or does not work well. During the migration process, to guarantee security for the state information, communication is established through a security protocol, such as IPSec (IP Security).

Fig. 1 a) shows the process of using the Web service in SMWSF, and the details are described as follows. First of all, a service requestor sends a request including an encryption key to the service provider, choosing a type of the stateful resource and a location where the stateful resource is stored. Then, the Web service generates an XML document including the state information, and sends a request with the generated XML document to the state management web service. After the state management service encrypts the state information with the requestor's encryption key, the


encrypted state information is stored in the chosen place, as well as in the desired type. After that, the state management service returns the URI of the stateful resource to the Web service, and then the Web service returns the URI to the requestor. When the requestor requests the stateful resource to be moved, the Web service takes the type and URI of the stateful resource from the requestor, and sends a migration request to the state management service. The state management service reads the corresponding resource, and then stores the state information in the specified location as well as in the specified type. In addition, Fig. 1 b) shows the details of how the Web service changes the state management service to another one. The old state management service sends the URI of the stateful resource to the new state management service. After the new state management service gets in contact with the stateful resource, the Web service communicates with the new state management service through a security channel.

Fig. 1. Proposed State Management System Model. (a) The state information can migrate from repository A to B. (b) The service requestor can choose the state management service.


4 Implementation

We implemented a prototype system to test SMWSF. We use ASP.NET in C# to implement web service applications. In addition, we use Microsoft IIS (Internet Information Server) as a web application server. The prototype system is implemented on top of the Microsoft .NET framework. SMWSF implements basic WSRF specifications, such as WS-ResourceProperties [7] and WS-ResourceLifetime [8]. In the prototype system, the system is divided into three parts: web services, a state management service including the proposed framework, and stateful resources. The system also provides libraries for implementing web services and a lifetime management module for keeping track of the stateful resources created by client requests.

In this system, every web service should be developed and implemented by importing the libraries provided by SMWSF. Web services are ASP.NET web services. Service authors annotate their web service codes with metadata via .NET attributes. The port type attribute allows service authors to easily import the functionality that is defined by the web service library into their service. The web service libraries include state management interfaces, an XML generator, and a GUID generator. Web services use the state management interfaces, which are just defined but not implemented, to communicate with the state management service. The state management web service is in charge of the implementation of the interfaces. Therefore, web services can use another state management service which is provided by another service provider. Web services generate the service requestor's GUID (Global Unique ID) by using the GUID generator. The GUID, which is a 128-bit integer number generated by using a hash function, is used to grant access to a stateful resource. Web services generate an XML document to manipulate state information as a stateful resource by using the XML generator. In a web service, class-level data members are declared as part of the stateful resource via the [Resource] attribute to generate an XML document. The values of the class-level data members are saved into an XML document. The generated XML document includes a GUID and an encryption key in dedicated elements, and the class-level data members' names and values that are described in the [Resource] attribute in the web service are set in the document as well. This XML document is encrypted with the encryption key, and then stored in the repository.

The state management service manages web services' state information. This service is implemented as a web service. Service authors can make web services communicate with another state management web service provided by another service provider. The state management web service implements interfaces for state management, such as store, restore, deletion, and migration of stateful resources. To do this, the port type attribute is also used to import functionality that is defined in SMWSF. In the case of migration, the state management web service first generates an XML document from the repository by using one of the stateful resource repository management modules. The XML document is stored in a desired place, as well as in a desired type. When the service requestor wants web services to use another state management service, the XML document is sent to the selected state management service, and then stored in the repository. A stateful resource must persist between service invocations until the end of its lifetime, meaning that the state of a stateful resource after one invocation should be


the same as its state before the next. Persistence can be achieved by holding the resource in memory, writing it to disk, or storing it in a database. The memory model provides the best response-time performance but is the least fault-tolerant. The file system model provides slower performance than the other models, but provides the ability to survive server failure at the cost of some performance. The database model is slower than the memory model, but provides scalability, fault-tolerance, and access to powerful query/discovery mechanisms that are not present in the file system model. Therefore, in the proposed system the stateful resource repository management implements the stateful resource in these three types.
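A minimal sketch of the idea behind these repository types and of the migration operation (our own illustration in Python rather than the paper's C#/.NET implementation; class and method names are invented, and encryption and the database backend are omitted):

import json, os

class MemoryRepository:
    def __init__(self):
        self._store = {}
    def save(self, guid, state):          # state: dict of resource properties
        self._store[guid] = dict(state)
    def load(self, guid):
        return dict(self._store[guid])
    def delete(self, guid):
        self._store.pop(guid, None)

class FileRepository:
    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)
    def _path(self, guid):
        return os.path.join(self.directory, f"{guid}.json")
    def save(self, guid, state):
        with open(self._path(guid), "w") as f:
            json.dump(state, f)
    def load(self, guid):
        with open(self._path(guid)) as f:
            return json.load(f)
    def delete(self, guid):
        try:
            os.remove(self._path(guid))
        except FileNotFoundError:
            pass

def migrate(guid, source, target):
    """Move a stateful resource between repositories (e.g. memory -> file),
    mirroring the migration interface of the state management service."""
    state = source.load(guid)
    target.save(guid, state)
    source.delete(guid)
    return target

A database-backed repository would expose the same save/load/delete interface, which is what allows migrate to move a resource between any two repository types.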

5 Performance Evaluation

In this section, we conducted two comparison experiments to evaluate the performance of SMWSF; we compared the system based on SMWSF with other systems implementing the WSRF specification: GT4 and WSRF.NET. First, we implemented a calculator web service, and estimated the response time for creating, deleting, storing, restoring, and migrating state information of a web service. We performed each operation 1000 times, and then measured the average response time. The calculator service is a web service providing computation functionalities. Second, we implemented an airline booking web service on each framework. Then, we compared the service time, which is measured by the time in seconds for the client to receive the full response, as the number of clients increases. The airline booking service is a web service that retrieves airline schedules and then books seats. To perform this experiment automatically, we made the airline booking service book the first retrieved flight and reserve a fixed seat on the plane. We used four identically configured machines that have an Intel Pentium 4 3.0GHz CPU with 1GB RAM and an 80GB E-IDE 7200 RPM HDD. Two machines for SMWSF and one for WSRF.NET ran Windows 2003 Server Standard Edition. The one for GT4 ran Linux Fedora Core 4 (Linux kernel 2.6.11). In SMWSF, stateful resources were implemented in three types, which were XML file, database, and memory; MySQL was used as the database for this experiment. GT4 stores state information in system memory. WSRF.NET implements WS-Resources using SQL Server 2000.

Table 1 shows the average response time for basic operations of the calculator service. All the numbers are in milliseconds. The Remote SMWSF case is when the calculator service uses a remote state management service provided by another service provider. In this case, the average response time was slower than SMWSF and the other systems because an additional communication cost between a web service and its state management service is needed. As compared with GT4, which implements stateful resources in the memory model, SMWSF was faster in every test because GT4 is implemented in Java. As compared with WSRF.NET, which implements stateful resources in the database model, SMWSF had similar performance although there were additional overheads during encryption and decryption. Therefore, we could see that SMWSF has performance as good as that of WSRF.NET.


Table 1. Average response time for basic operations (all values in milliseconds)

                              Create   Delete   Restore   Store
GT4-Java (memory)              16.3     23.6     28.6     24.9
WSRF.NET (database)            14.7     21.4     38.2     24.4
SMWSF - file system            15.3     23.5     32.8     22.3
SMWSF - memory                 13.1     20.1     27.9     19.5
SMWSF - database               14.2     21.8     37.5     24.0
Remote SMWSF - file system     21.5     34.4     44.1     35.2
Remote SMWSF - memory          19.4     30.8     37.4     30.4
Remote SMWSF - database        20.8     32.9     47.4     36.4

Fig. 2 shows the service time of the airline booking web service as the number of clients increases from 50 to 400. In this experiment, Remote SMWSF is considered only for the memory model. As the number of clients increases, GT4 showed the shortest service time, followed by SMWSF, Remote SMWSF, and WSRF.NET. As might be expected, the systems using the memory model were faster than the systems using the database and file system models. Moreover, Remote SMWSF was slower than SMWSF and GT4 because of the additional communication cost and encryption overhead. GT4 had stable performance even though the number of clients increased because GT4 implements HTTP connection caching, which reuses previously created HTTP connections. In the first experiment, the response times of SMWSF and WSRF.NET were comparable. In this experiment, SMWSF was faster than WSRF.NET because of the overhead caused by WSRF.NET's web service extension; WSRF.NET uses Microsoft Web Services Enhancements to provide SOAP message exchange. In addition, WSRF.NET exchanges more SOAP data than SMWSF.

Fig. 2. Service time for an airline booking service according to the number of clients


6 Conclusions and Future Work

In this paper, we proposed a new state management framework which provides scalability, flexibility, security, and reliability. This framework is called the State Management Web Service Framework (SMWSF). We also implemented a prototype system that is based on the Microsoft .NET framework. In the prototype system, the state management is separated from the web services. The loose coupling between the web service and the state management provides scalability. Flexibility is provided by letting service requestors choose another state management service among other service providers. The state information can migrate to another repository, where the type of stateful resource can be changed from one type to another. The migration of the state information enhances reliability and security when the repository fails or does not work well. Many issues still remain to be addressed. Because of the loose coupling between the web service and the state management, some communication overhead is incurred. Moreover, the communication between the state management service and the stateful resource is done through a secure channel. We need to study how to reduce this additional communication overhead. We also need to implement many other components of the framework, especially the WS-Notification specifications. In addition, more experiments must be conducted on fault tolerance to evaluate the performance of the proposed system.

References
1. Booth, D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C., Orchard, D.: Web Services Architecture – W3C Working Draft 8 August 2003. http://www.w3.org/TR/2003/WD-ws-arch-20030808/
2. Xie, Y., Teo, Y.M.: State Management Issues and Grid Services. International Conference on Grid and Cooperative Computing. LNCS, Vol. 3251 (2004) 17-25
3. Tuecke, S., Czajkowski, K., Foster, I., Frey, J., Graham, S., Kesselman, C., Vanderbilt, P.: Open Grid Service Infrastructure (OGSI). (2002)
4. Czajkowski, K., Ferguson, D., Foster, I., Frey, J., Graham, S., Sedukhin, I., Snelling, D., Tuecke, S., Vambenepe, W.: The WS-Resource Framework. http://www.globus.org/wsrf/ (2004)
5. Globus Toolkit version 3. http://www.globus.org/
6. Song, X., Jeong, N., Hutto, P.W., Ramachandran, U., Rehg, J.M.: State Management in Web Services. IEEE International Workshop on FTDCS'04 (2004)
7. Graham, S., Czajkowski, K., Ferguson, D., Foster, I., Frey, J., Leymann, F., Maguire, T., Nagaratnam, N., Nally, M., Storey, T., Sedukhin, I., Snelling, D., Tuecke, S., Vambenepe, W., Weerawarana, S.: WS-ResourceProperties. http://www-106.ibm.com/developerworks/library/ws-resource/ws-resourceproperties.pdf (2004)
8. Frey, J., Graham, S., Czajkowski, C., Ferguson, D., Foster, I., Leymann, F., Maguire, T., Nagaratnam, N., Nally, M., Storey, T., Sedukhin, I., Snelling, D., Tuecke, S., Vambenepe, W., Weerawarana, S.: WS-ResourceLifetime. http://www-106.ibm.com/developerworks/library/ws-resource/ws-resourcelifetime.pdf (2004)

Efficient and Reliable Execution of Legacy Codes Exposed as Services

Bartosz Baliś1, Marian Bubak1,2, Kamil Sterna1, and Adam Bemben1

1 Institute of Computer Science, AGH, al. Mickiewicza 30, 30-059 Kraków, Poland
2 Academic Computer Centre – CYFRONET, Nawojki 11, 30-950 Kraków, Poland
{bubak,balis}@agh.edu.pl
Phone: (+48 12) 617 39 64; Fax: (+48 12) 633 80 54

Abstract. In this paper, we propose a framework that enables fault tolerance and dynamic load balancing for legacy codes running as backends of services. The framework architecture is divided into two layers. The upper layer contains the service interfaces and additional management services, while the legacy backends run in the lower layer. The management layer can record the invocation history or save state of a legacy worker job that runs in the lower layer. Based on this, computing can be migrated to one of a pool of legacy worker jobs. Fault-tolerance in the upper layer is also handled by means of active replication. We argue that the combination of these two methods provides a comprehensive support for efficient and reliable execution of legacy codes. After presenting the architecture and basic scenarios for fault tolerance and load balancing, we conclude with performance evaluation of our framework. Keywords: Legacy code, fault tolerance, load balancing, migration.

1 Introduction

Recently developed systems for conducting e-Science experiments evolve into Service-Oriented Architectures (SOA) that support a model of computation based on the composition of services into workflows [4]. Legacy codes, nevertheless, instead of being rewritten or reengineered, are often adapted to new architectures by exposing them as services or components, and remain the computational core of the application. Static and dynamic load balancing (LB) as well as fault tolerance (FT) are highly desired features of a modern execution environment, necessary to sustain quality of service, high reliability, and efficiency. The workflow model, in which the application logic is separated from the application itself, is for this very reason well-suited for LB and FT support, because the execution progress of a workflow is handled by generic services, the enactment engines. However, the presence of legacy jobs running in the backends of services complicates this support, as the legacy jobs are often not directly modeled in the workflow and thus not handled by the same generic enactment engines. While FT and LB of parallel and distributed systems [2] are very well recognized for web services [10] [6] or components [8], similar problems for legacy codes running in service backends are still not well addressed.


This paper presents a solution to support FT and LB for legacy codes exposed as services. We propose a generic framework which enables seamless and transparent checkpointing, migration, and dynamic load balancing of legacy jobs running as backends of services. The proposed framework is based on our previous work, a system for virtualization of legacy codes as grid services, LGF (Legacy to Grid Framework) [1] [11]. Unlike our previous work, which focused solely on exposing legacy codes as services, this paper focuses on the aspects of reliable and efficient execution of legacy systems. We propose an architecture for the framework and justify our design choices. We present a prototype implementation of the framework and perform a feasibility study in order to verify whether our solution fulfills functional (FT and LB scenarios) and non-functional (performance overhead evaluation) requirements. The remainder of this paper is organized as follows. Section 2 presents related work. Sections 3 and 4 describe the architecture and operation of our FT-LB framework, respectively. We conclude the paper in Section 5, which studies the impact of FT and LB mechanisms on the overall application performance.

2 State of the Art

Most existing approaches to adapting legacy codes to modern environments focus on command-line applications and follow a simple adapter pattern in which a web service or a component (e.g. CORBA [5]) wrapper is automatically generated for a legacy command-line application, based on a specification of its input parameters [9]. Those approaches differ in terms of addressing other aspects of legacy system adaptation such as security [7], automatic deployment, or integration with a framework for workflow composition. In some cases even brokering mechanisms are taken into account [3]. Of the available tools, the relatively most comprehensive solution is the CAWOM tool [12], in which one can actually specify the format of the legacy system's responses (using a formal language), which allows for more complex interactions with a legacy system, including synchronous and asynchronous calls. Overall, the mentioned tools, whether they offer simple wrapping or more advanced frameworks with brokering, automatic deployment, and workflow composition capabilities, neither take into account nor are designed to support fault tolerance and dynamic load balancing of legacy systems. Our framework is designed to fill this gap. The separation into two layers, and the operation of legacy jobs in a client instead of a server fashion, solves many issues and enables flexible solutions of LB and FT problems. We describe those in the following sections of this paper.

3 LB-FT Framework Architecture

The architecture of our framework, presented in Fig. 1, is comprised of three main components, namely: Service Client, Service Manager which exposes interfaces, and Backend System on which the legacy code is actually executed.



Fig. 1. Architecture of FT & LB Framework for Legacy Code

The heart of the system is the Service Manager, which exposes the legacy code as a service (External Interface) to be invoked by a Service Client. The central concept that enables transparent migration of computation, a prerequisite for FT and LB, is the decoupling of the service interface layer from the actual legacy code, the latter being deployed as a job (Worker Job) on a (usually remote) machine (Backend System). An important design detail is that the legacy Worker Job acts as a client to the Service Manager and fetches Service Client requests from the Internal Interface in the Service Manager. An alternative option would be to use notifications. However, in such a case Worker Jobs would have to act as servers, which would make their deployment and migration much more difficult. The remaining component in the Service Manager is the Resource Manager, whose main responsibility is to submit Worker Jobs to Backend Systems. In addition, we support stateful conversation with legacy software through WSRF-like services. To this end, a client can create a Resource using a Factory service. In this way, the framework enables stateful, session-based interaction with legacy services. In the architecture, an external Information Service is present, to which Backend Systems register, while the Resource Manager acts as a resource broker deciding which Backend System to submit a new legacy Worker Job to, based on a list of available Backend Systems and corresponding monitoring information (such as current load). Alternatively, the Information Service could be replaced by an external Resource Broker to which all resource brokerage decisions would be delegated by the Resource Manager. Thanks to the decoupling of service interfaces and legacy back ends, multiple legacy worker jobs can be connected to a single service as a resource pool. The computation can be easily migrated from one back end to another in case of a performance drop (load balancing) or failure (fault tolerance). The framework supports both history-based and checkpoint-based migration of stateful jobs. For
the former case, the Service Manager records the invocation history of a given client and, in case of migration to another worker job, the invocations can be repeated. The latter case is supported by allowing a legacy job to periodically send its state snapshot to the Service Manager; in the event of migration, the new worker job can restore the last state. The architecture enables both low-level and high-level checkpointing to create state snapshots, though the current implementation supports only the high-level one, in which the worker jobs have to provide an implementation of code to create and restore snapshots. In our framework, the service interface layer is thin and performs no processing, merely forwarding requests plus other management functions related to migration, state restoration, etc. However, this layer is also subject to failure, e.g. due to software aging of the underlying application containers. Consequently, we also take into account the fault tolerance of this layer using the technique of active replication. Multiple Service Managers can be assigned to a single interaction between a client and a legacy back end. The client submits all requests to all Service Managers. Similarly, the backend worker job fetches requests from all Service Managers. In consequence, requests are received by the worker job multiple times; however, they are executed only once.
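To make the worker-as-client design concrete, the following is a minimal, hypothetical sketch of the worker-side loop; the interface and class names (InternalInterface, Request, LegacyCode) are illustrative assumptions, not the actual LGF or framework API, and heartbeating and failure handling are omitted. It shows a worker polling several Service Managers, executing each request exactly once, returning the result to every manager, and storing a high-level state snapshot.

```java
// Hypothetical sketch only; names and signatures are illustrative, not the framework's API.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

interface InternalInterface {                      // one per Service Manager
    Request fetchRequest(String workerId);         // null if nothing is pending
    void returnResult(String requestId, Object result);
    void storeState(String workerId, byte[] snapshot);
}

class Request {
    final String id; final String operation; final Object[] args;
    Request(String id, String operation, Object[] args) {
        this.id = id; this.operation = operation; this.args = args;
    }
}

interface LegacyCode {                             // wrapper around the legacy library
    Object invoke(String operation, Object[] args);
    byte[] snapshot();                             // high-level checkpoint
}

class WorkerJob {
    private final String workerId;
    private final List<InternalInterface> managers; // more than one under active replication
    private final LegacyCode legacy;
    private final Set<String> executed = new HashSet<>();

    WorkerJob(String workerId, List<InternalInterface> managers, LegacyCode legacy) {
        this.workerId = workerId; this.managers = managers; this.legacy = legacy;
    }

    void run() throws InterruptedException {
        while (true) {
            for (InternalInterface manager : managers) {
                Request req = manager.fetchRequest(workerId);
                if (req == null || !executed.add(req.id)) continue; // execute each request only once
                Object result = legacy.invoke(req.operation, req.args);
                byte[] snapshot = legacy.snapshot();                // checkpoint after the call
                for (InternalInterface m : managers) {              // answer every manager
                    m.returnResult(req.id, result);
                    m.storeState(workerId, snapshot);
                }
            }
            Thread.sleep(1000);                                     // simple polling interval
        }
    }
}
```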

4 Fault Tolerance and Load Balancing Scenarios

Fault tolerance and load balancing scenarios for the backend side differ only in the way the migration is initiated. Both scenarios are shown in Fig. 2 (a) and (b) respectively.


Fig. 2. Scenarios involving migration: (a) fault tolerance, (b) load balancing

The Resource Manager fetches a client request (not shown) and assigns a proper worker (assign client). The worker periodically signals its availability (heartbeat), retrieves the client request (fetch request), and stores checkpoints (store state). A migration is triggered either by a worker crash (fault tolerance) or a persistent node overload (load balancing). In the latter case, the worker is
explicitly destroyed (destroy). In either case, the Service Manager assigns another worker (assign client), which restores the latest state (restore state), and the operation is carried on as before. For fault tolerance and software rejuvenation purposes, we employ the active replication mechanism at the upper layer. It is based on duplicating every operation on two (or more) service managers. The corresponding scheme is depicted in Fig. 3. The client calls two service managers at once (1, 2). Similarly, the backend worker job fetches requests from both service managers (3, 5); however, only one request is actually executed (4). The result is returned to both service managers (6, 7) and they forward it back to the client (8, 9). When one of the service managers crashes (10), the operation is carried on with one service manager only (11-13).

Fig. 3. Sequence of system operations for active replication model
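A hypothetical sketch of the Service-Manager-side logic behind the scenarios in Fig. 2 is given below; the class names, thresholds, and method signatures are assumptions made for illustration, not the framework's actual implementation. A missing heartbeat beyond a timeout triggers the fault-tolerance path, a sustained overload triggers load balancing, and both end in reassigning the client to another worker and restoring its state.

```java
// Illustrative sketch; names and threshold values are assumed, not taken from the paper.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class MigrationMonitor {
    private static final long HEARTBEAT_TIMEOUT_MS = 3_000;   // assumed value
    private static final double OVERLOAD_THRESHOLD = 0.9;     // assumed value

    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
    private final Map<String, Double> lastLoad = new ConcurrentHashMap<>();
    private final WorkerPool pool;

    MigrationMonitor(WorkerPool pool) { this.pool = pool; }

    void onHeartbeat(String workerId, double load) {
        lastHeartbeat.put(workerId, System.currentTimeMillis());
        lastLoad.put(workerId, load);
    }

    /** Called periodically; decides whether the client's computation must move. */
    void check(String clientId, String workerId) {
        long silence = System.currentTimeMillis()
                - lastHeartbeat.getOrDefault(workerId, 0L);
        boolean crashed = silence > HEARTBEAT_TIMEOUT_MS;             // fault tolerance
        boolean overloaded = lastLoad.getOrDefault(workerId, 0.0)
                > OVERLOAD_THRESHOLD;                                  // load balancing
        if (!crashed && !overloaded) return;
        if (overloaded) pool.destroy(workerId);       // destroy is explicit only in the LB case
        String newWorker = pool.assign(clientId);     // pick another worker job from the pool
        pool.restoreState(newWorker, clientId);       // last snapshot, or replay invocation history
    }
}

interface WorkerPool {
    String assign(String clientId);
    void destroy(String workerId);
    void restoreState(String workerId, String clientId);
}
```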

5 Performance Evaluation

This section presents an evaluation of the impact of the framework on the overall application's performance. The framework prototype was developed in Java and it generates WSRF services based on Globus Toolkit 4. The following tests were conducted: (1) the impact of the interposition management layer on the latency and throughput of a service as compared to direct invocation, (2) the latency of migration in a fault tolerance and a load balancing scenario, and (3) the cost of active replication in terms of the application's completion time. For the evaluation, we used a simple algorithm computing a requested number in the Fibonacci sequence, exposed as a service. In total, four IA-32 machines with 1 GB RAM and 1.2 GHz CPU, running Linux Debian, Java 1.4 and Globus 3.2.1 were used. Fig. 4 (a) shows the overhead of the interposition layer. We compared the performance of two functionally equivalent web services. One of them used a legacy library while the other was standalone. Both web services exposed a single method that calculated the length of a byte array passed as a parameter. Latency and bandwidth of both services were obtained based on the formula:

time(length) = length/bandwidth + latency    (1)


Using least squares linear regression fitting, we have obtained figures for both services. As a result, we observed that while latency was increased 2.4 times, bandwidth was reduced only by 12%. In the fault tolerance scenario, the Service Manager loses connection with one of the workers (at the moment of starting the service on a backend system). After the timeout, the backend system is considered to have undergone abnormal termination. Process migration is initiated, and the method invocations are delegated to another node. The result is shown in Fig. 4 (b).
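The least-squares fit behind Eq. (1) can be reproduced in a few lines. The sketch below is illustrative only; the measurement values in it are placeholders, not the numbers from this experiment.

```java
// Minimal sketch of a least-squares fit for Eq. (1): time = length/bandwidth + latency.
// The sample data below are hypothetical placeholders.
class LatencyBandwidthFit {
    public static void main(String[] args) {
        double[] length = {1_000, 10_000, 100_000, 1_000_000};   // bytes (hypothetical)
        double[] time   = {0.012, 0.015, 0.045, 0.340};          // seconds (hypothetical)

        // Fit y = a*x + b with a = 1/bandwidth and b = latency.
        int n = length.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += length[i]; sy += time[i];
            sxx += length[i] * length[i]; sxy += length[i] * time[i];
        }
        double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double b = (sy - a * sx) / n;

        System.out.printf("latency   = %.4f s%n", b);
        System.out.printf("bandwidth = %.0f bytes/s%n", 1.0 / a);
    }
}
```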


Fig. 4. (a) Interposition layer overhead (b) Migration overhead

We used history-based fault tolerance, whose cost can be estimated at approximately 1-2 seconds. An additional cost is also connected with the number of lost heartbeats before a node is considered to be undergoing a failure. The load balancing scenario is different only in terms of the cause which triggers the migration, which in this case is a node overload over a certain number of heartbeats. The Service Manager decides to migrate the resources to a node exposing better performance. The overhead proved to be quite similar and is not shown here separately. Finally, we study the active replication scenario. The framework runs with two service managers present, running in separate containers and performing identical actions. One Service Manager crashes. The platform continues normal work; however, the invocations are handled by one service manager only. The overhead in this scenario is caused by additional processing: on the client side, which performs more invocations and has to filter out redundant results, and on the backend side, where redundant requests have to be discarded and the results have to be returned to more than one Service Manager. Fig. 5 shows the processing time from a worker perspective, for a single, and for two service managers. The times for those two cases are practically the same, which proves that the additional processing time – indeed, only limited to more invocations and discarding redundant operations – does not induce substantial overhead.


Fig. 5. Active replication overhead

At the same time we observe undisturbed operation of the system despite the crash of one Service Manager.

6 Conclusion

We have presented a framework for enabling fault tolerance and dynamic load balancing for legacy codes running in the backend of web services, typically as parts of e-Science workflows. We proposed a two-layer architecture in which a management layer containing service interfaces is decoupled from a computational core which is deployed in separate worker jobs. We use different, complementary strategies for FT & LB in the two system layers. In the backend layer, we use an efficient method based on a pool of worker jobs of a certain type among which the computing can be switched when there is a need to do so. Recovering state from snapshots or repeating the invocation history can be used in the case of migration of stateful services. In the front-end layer, we use active replication of service interfaces. Though this method is expensive, it does not require further management such as state snapshots or heartbeat monitoring. Our investigation revealed that thanks to the very architecture of our framework, in which the service layer is thin and contains no processing, the inherent overhead of active replication is compensated and is perfectly affordable. Overall, the performed feasibility study shows the framework fulfills the functional and performance requirements and constitutes a comprehensive solution for reliable and efficient running of workflows that contain legacy code in the computational back ends, which is important for the development of e-Science applications. Future work encompasses, most importantly, the expansion of our system to data-centric workflows involving streaming between legacy jobs, and support for load balancing and fault tolerance scenarios for complex legacy systems, such as parallel MPI applications. Other aspects include integration with a security infrastructure.


Acknowledgements. This work is partly supported by EU-IST Project CoreGrid IST-2002-004265, and EU-IST Project Virolab IST-027446.

References
1. Balis, B., Bubak, M., Wegiel, M.: A Solution for Adapting Legacy Code as Web Services. In: Proc. Workshop on Component Models and Systems for Grid Applications, 18th Annual ACM International Conference on Supercomputing, Saint-Malo, France, Kluwer (July 2004)
2. Cao, J., Spooner, D. P., Jarvis, S. A., Nudd, G. R.: Grid Load Balancing Using Intelligent Agents. Future Generation Computer Systems special issue on Intelligent Grid Environments: Principles and Applications, 21(1) (2005) 135-149
3. Delaittre, T., Kiss, T., Goyeneche, A., Terstyanszky, G., Winter, S., Kacsuk, P.: GEMLCA: Running Legacy Code Applications as Grid Services. Journal of Grid Computing, Vol. 3, No. 1-2. Springer Science + Business Media B.V. (June 2005) 75-90
4. E-Science 2006, Homepage: http://www.escience-meeting.org/eScience2006/
5. Gannod, G. C., Mudiam, S. V., Lindquist, T. E.: An Architecture Based Approach for Synthesizing and Integrating Adapters for Legacy Software. Proc. 7th Working Conference on Reverse Engineering. IEEE (November 2000) 128-137
6. Hwang, S., Kesselman, C.: A Flexible Framework for Fault Tolerance in the Grid. Journal of Grid Computing 1(3) (2003) 251-272
7. Kandaswamy, G., Fang, L., Huang, Y., Shirasuna, S., Marru, S., Gannon, D.: Building Web Services for Scientific Grid Applications. IBM Journal of Research and Development, Vol. 50, No. 2/3 (March/May 2006) 249-260
8. Moser, L., Melliar-Smith, P., Narasimhan, P.: A Fault Tolerance Framework for CORBA. International Symposium on Fault Tolerant Computing (Madison, WI) (June 1999) 150-157
9. Pierce, M., Fox, G.: Making Scientific Applications as Web Services. Web Computing (January/February 2004)
10. Tartanoglu, F., Issarny, V., Romanovsky, A., Levy, N.: Dependability in the Web Services Architecture. Architecting Dependable Systems. LNCS 2677 (June 2003)
11. Wegiel, M., Bubak, M., Balis, B.: Fine-Grain Interoperability with Legacy Libraries Virtualized as Web Services. Proc. Grid-Enabling Legacy Applications and Supporting End Users Workshop within the framework of the 15th IEEE HPDC, Paris, France (June 2006)
12. Wohlstadter, E., Jackson, S., Devanbu, P.: Generating Wrappers for Command Line Programs: The cal-aggie wrap-o-matic Project. Proc. 23rd International Conference on Software Engineering (ICSE 2001). ACM (2001) 243-252

Provenance Provisioning in Mobile Agent-Based Distributed Job Workflow Execution

Yuhong Feng and Wentong Cai
School of Computer Engineering, Nanyang Technological University, Singapore 639798
{YHFeng, ASWTCai}@ntu.edu.sg

Abstract. Job workflow systems automate the execution of scientific applications, however they may hide how the results are achieved (i.e., the provenance information of the job workflow execution). This paper describes the development and evaluation of a decentralized recording and collection scheme for job workflow provenance in mobile agent-based distributed job workflow execution. A performance study was conducted to evaluate our approach against the one using a centralized provenance server. The results are discussed in the paper. Keywords: Distributed Job Workflow Execution, Provenance Recording and Collection, Grid Computing.

1 Introduction

The provenance of some data is defined as the documentation of the process that led to the data [1]. The necessity of provenance for job workflow execution is apparent since provenance provides a traceable path on how a job workflow was executed and how the resulting data were derived. It is particularly important in a Service Oriented Architecture (SOA) since shared services and data sets might be used in the course of the job workflow execution. Provenance information can be processed and used for various purposes, for example, for validation of e-Science experiments [2], credibility analysis of the results of workflow execution [3], fault tolerance for service-based applications [4], and data set regeneration for data-intensive scientific applications [5]. The provenance information can be generated from the static information available in the original workflow specification (e.g., data dependencies) together with the runtime details obtained by tracing the execution of the workflow. The trace can be automatically generated by developing either a special "wrapping service" of the engine [6] or an "engine plugin" [7] to capture and record provenance-related data directly from the workflow engine. The workflow trace can also be collected collectively by the services that execute the subjobs [8] or the services together with the workflow engine [9]. But this puts the responsibility of provenance data recording on the service providers and may also require service modification.


No matter how the traces are collected, in general some special provenance services are used in the current systems to store the provenance data and to provide an interface for users to query the data. Thus, a protocol is needed for various service providers and the workflow engine to communicate with the provenance services during the provenance collection process [1]. A taxonomy of data provenance techniques can be found in [10], and a comprehensive documentation on provenance architecture can be found in [11]. Data-intensive scientific applications often involve high-volume and distributed data sets. They can generally be expressed as a workflow of a number of analysis modules, each of which acts on specific sets of data and performs multidisciplinary computations. To reduce the communication overhead caused by data movement and to provide decentralized control of execution during workflow enactment, the Mobile Code Collaboration Framework (MCCF) was developed to map the execution of subjobs to the distributed resources and to coordinate the subjobs' execution at runtime according to the abstract workflow provided by users [12]. LMA [13] is used in the MCCF for the purpose of separating functional description and executable code. The functional description of an LMA is described using an Agent Core (AC). An AC is essentially an XML file, containing the job workflow specification and other necessary information for agent creation and execution. An AC can be migrated from one resource to another. As for the executable code, to separate subjob-specific code and common non-functional code (i.e., code for handling resource selection, subjob execution, agent communication, and AC migration), Code-on-Demand (CoD) [14] is used in the MCCF; that is, subjob-specific code is downloaded to the computational resource and executed on demand. This enables an analysis module in a data-intensive scientific application to be executed at a computational resource close to where the required data set is. The execution of common non-functional code is carried out by a group of underlying AC agents (or agents in short). The MCCF, which does not have a centralized engine, is different from existing scientific workflow engines (e.g., Condor's DAGMan and SCIRun). Hence, the objective of this paper is to develop a provenance recording and collection algorithm so that mobile agents deployed in the execution of a job workflow can collectively collect a complete set of information about the job workflow execution.

2 Partner Identification in the MCCF

Job workflows in MCCF are modeled using a Directed Acyclic Graph (DAG). A DAG can be denoted as G = (J, E), where J is the set of vertices representing subjobs, i.e., J = {J0, J1, ..., Jn−1}, and E is the set of directed edges between subjobs. There is a directed edge from subjob Ji to Jj if Jj requires Ji's execution results as input. Data dependency […] is partitioned into groups: {0, 2, 5, 8, 9}, {7}, {1, 4, 6} and {3}, as illustrated in Figure 1(d). (v) Label edges: the edges between two subjobs Ji and Jj in the same group are labeled "m" (denoted as Ji →m Jj), the edges between subjobs of different groups are labeled "c" (denoted as Ji →c Jj), and the edges in the original DAG but not in the spanning tree are labeled "d" (denoted as Ji →d Jj), as illustrated in Figure 1(d). (We assume that the depth-first search and pre-order traversal algorithms used in the preprocessing visit child nodes in the order from right to left.) The preprocessed information is included in the AC and used during the dynamic job workflow execution. Suppose the current subjob is Ji; for one of its outgoing edges (Ji, Jj): (i) If Ji →m Jj, the AC agents will select the resources for executing Jj, and the AC replica will be migrated for Jj's execution after Ji completes its execution. (ii) If Ji →c Jj, similar to the last case, the AC agents will select the resources for executing Jj, but a new AC replica will be created
for Jj's execution. (iii) If Ji →d Jj, the AC agents need to communicate so that Jj can locate Ji's execution result. The MCCF uses a contact-list-based agent communication mechanism [16] for subjob result notification. Two agents communicating with each other are called communication partners (or partners in short). Each AC maintains a list of partners with their locations (that is, a contact list). Before an AC is migrated or discarded, its agents will notify its partners so that they can update their contact lists accordingly. The location of the subjob's execution result will also be notified to the partners. The partner identification is carried out dynamically during the job workflow execution. Assume Ed is the set of edges that are marked with "d" in G = (J, E). Also assume that, in the spanning tree generated by the preprocessing algorithm described above, the sub-tree rooted at Ji is denoted as TJi. Suppose Ji and Jj are two subjobs that are currently under execution; if ∃(Ji′, Jj′) ∈ Ed such that Ji′ ∈ TJi ∧ Jj′ ∈ TJj, then the AC agents executing the subjobs Ji and Jj are partners. (More detail about the preprocessing-based dynamic partner identification algorithm can be found in [15].)
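The partner condition above is essentially a membership test over the "d"-labeled edges. The following is an illustrative sketch of that test; the types and names are hypothetical and are not part of the MCCF implementation.

```java
// Illustrative sketch; Edge, the subtree sets, and the method names are hypothetical.
import java.util.Set;

class Edge {
    final int from; final int to;
    Edge(int from, int to) { this.from = from; this.to = to; }
}

class PartnerIdentification {
    /**
     * Ji and Jj (both currently executing) are partners iff some edge (Ji', Jj')
     * labeled "d" connects a node in the sub-tree rooted at Ji with a node in
     * the sub-tree rooted at Jj.
     */
    static boolean arePartners(Set<Edge> dLabeledEdges,
                               Set<Integer> subtreeOfJi, Set<Integer> subtreeOfJj) {
        for (Edge e : dLabeledEdges) {
            if (subtreeOfJi.contains(e.from) && subtreeOfJj.contains(e.to)) {
                return true;
            }
        }
        return false;
    }
}
```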

3 Provenance Recording and Collection Protocol

Let G(Ji) be the group id of subjob Ji. It is easy to prove that the grouping algorithm described in the last section has the following property: in G, ∀ Ji ∈ J with G(Ji) > 0, there exists a path p from Ji to Jn−1 where, for any two consecutive subjobs Jkq and Jkq+1 on the path p, we have G(Jkq) ≥ G(Jkq+1). p is called a propagation path. It is obvious that the AC that is finally returned to the user (that is, the original AC created by the user) will contain complete provenance information for the job workflow if partners with a higher group id propagate provenance information to the partners with lower group ids during the job workflow execution. This forms the basis for the development of the provenance recording and collection protocol. Let R(Ji) denote the subset of Ji's successors where, for any subjob Jj ∈ R(Ji), we have either (Ji →m Jj) or (Ji →c Jj). Assuming that subjob Ji is under execution, the main steps of the protocol are:
– On receiving a communication message from its partner, Ji updates its AC to include the provenance information and updates its contact list accordingly.
– On completion of Ji's execution, Ji's corresponding AC agents will record the location of Ji's execution result into Ji's AC.
– If R(Ji) ≠ ∅, as stated in Section 2, for each subjob Jj ∈ R(Ji), AC agents corresponding to Ji will locate resources, that is, the computational resource, the input data sets from the distributed data repository, and the code from the code repository, for the execution of Jj. This information
will be recorded in Ji's AC. Then, if |R(Ji)| > 1, |R(Ji)| − 1 replicas of Ji's AC will be created, one for each subjob Jj ∈ R(Ji) with (Ji →c Jj).
– Before Ji's corresponding AC is migrated (or discarded if R(Ji) = ∅), Ji's AC agents will send a communication message to all its partners in the contact list for execution coordination and contact list updating. The message contains the location of Ji's execution result and the scheduling information for each Jj ∈ R(Ji). The scheduling information for subjob Jj includes: the subjob id, the id of the AC replica to be used to execute the subjob, and the locations of the selected computational resource, input data sets, and code for the execution of the subjob. In addition, if the partner has a smaller group id, provenance information received by Ji from its partners with a larger group id (recorded in Ji's AC replica) during Ji's execution is also piggybacked on the message.
Using the above protocol, the provenance information will be recorded in the AC replicas and propagated along propagation paths during the distributed execution of the job workflow. Eventually, the AC that is finally returned to the user will contain the complete provenance information about the job workflow execution.
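A hypothetical sketch of the last step above, assembling the message sent to one partner, is shown below; the classes and fields are illustrative assumptions, not the actual MCCF message format. The point it captures is the group-id test: provenance records received from partners with a larger group id are forwarded only when the addressed partner has a smaller group id than the sender.

```java
// Hypothetical sketch; the message layout and names below are illustrative only.
import java.util.ArrayList;
import java.util.List;

class PartnerMessage {
    String resultLocation;                                   // location of Ji's execution result
    List<String> schedulingInfo = new ArrayList<>();         // one entry per Jj in R(Ji)
    List<String> piggybackedProvenance = new ArrayList<>();  // records forwarded down the path
}

class ProvenancePropagation {
    /**
     * receivedProvenance holds the provenance records that Ji received from partners
     * with a larger group id; they are piggybacked only on messages addressed to
     * partners with a smaller group id, so they flow along a propagation path
     * toward the original AC.
     */
    static PartnerMessage buildMessage(int ownGroupId, int partnerGroupId,
                                       String resultLocation,
                                       List<String> scheduling,
                                       List<String> receivedProvenance) {
        PartnerMessage msg = new PartnerMessage();
        msg.resultLocation = resultLocation;
        msg.schedulingInfo.addAll(scheduling);
        if (partnerGroupId < ownGroupId) {
            msg.piggybackedProvenance.addAll(receivedProvenance);
        }
        return msg;
    }
}
```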

4 Performance Evaluation

As explained in the last section, the provenance information is transmitted along with the messages for execution coordination and contact list updating. Although no additional messages are required, the size of each message increases. There is no centralized server used during the provenance information recording and collection. Execution provenance information can also be collected using a centralized provenance server which maintains a provenance repository. For each subjob executed, the AC agents need to notify the centralized server about the provenance information. After a job workflow completes its execution, users can then get the provenance information from the server. In this centralized approach, additional messages are required for the AC agents to communicate with the provenance server. Assuming that there is no need to collect provenance information for the start and end nodes (since they are assumed to have zero computation cost), the traffic generated in the centralized model can be estimated by:

(n − 2) ∗ msg    (1)

where msg denotes the average size of a provenance message carrying a single provenance record. To evaluate the performance of our distributed provenance collection algorithm, randomly generated Task Graphs (TGs), that is, job workflows, were executed in the prototype MCCF system [12] on a cluster of computers. Six pseudo-random TGs were generated using TGFF (http://ziyang.ece.northwestern.edu/tgff/).

Fig. 2. Random Task Graphs (TG0–TG5)

As stated in Section 2, ∀ Ji ∈ J, 0 < i < (n − 1), J0 < Ji < Jn−1. Thus, when a TG has multiple subjobs that have no offspring, a hypothetical subjob is added, with no computation cost, to serve as the common immediate successor of such subjobs. The generated TGs are illustrated in Figure 2, where the dotted filled circles represent the added hypothetical subjobs, and the dotted edges represent the added edges. The number of messages for execution coordination and contact list updating is shown in Table 1. These communication messages are required no matter whether the decentralized or the centralized method is used. During a job workflow execution, the messages that carry the propagated provenance records and the number of provenance records carried by such messages were tracked. For a fair comparison with the centralized method, assuming that each propagated provenance record is carried by a separate provenance message, the total number of such messages in the decentralized method would be the summation of the numbers of propagated provenance records contained in all the tracked messages. (Note that, in general, the bandwidth consumed by two or more messages sent separately is larger than that of sending them in a single bundle [17].) Each job workflow was executed 3 times, and the average numbers are shown in Table 1. Note that for the centralized method, formula (1) is used to calculate the number of messages generated for the provenance collection.

Table 1. Experiment Results

Task Graph   Msgs for Execution   Msgs for Provenance         % Improvement
             Coordination         Decentralized  Centralized
TG0          38                   8              33            76%
TG1          5                    0              14           100%
TG2          14                   0              15           100%
TG3          44                   11             31            65%
TG4          12                   0              13           100%
TG5          21                   7              16            56%

Table 1 also shows the percentage improvement of the decentralized method over the centralized one. From these results, we observe that the centralized model always generates a higher number of messages for provenance information recording. In particular, by recording the provenance information in the workflow
specification in the AC, provenance records do not always need to be propagated during a job workflow's execution. For example, no additional provenance record is propagated during the execution of task graphs TG1, TG2, and TG4 (and thus no provenance messages are required). In the execution of these task graphs, the agents of the AC replicas created during runtime are the partners of the agents of the original AC. Therefore, propagation of provenance information is not required. In this case, the percentage improvement of the decentralized model over the centralized model is 100%.

5 Conclusion and Future Work

Mobile agent-based distributed job workflow execution hides the Grid details from scientists, but it also hides how the result is achieved (that is, the provenance of the job workflow execution). Since data processing in scientific computing may require some level of validation and verification, the information on the services and data sets used during the workflow execution is required. Provenance support in many existing scientific workflow engines relies on a centralized provenance collection server for provenance recording and collection. However, in mobile agent-based distributed job workflow execution, there is no centralized workflow engine, and thus the provenance recording and collection should naturally also be carried out in a distributed manner. By studying the agent communication in the MCCF and the properties of the preprocessing algorithm for partner identification, a distributed provenance recording and collection protocol has been developed. The subjob provenance information is transmitted along the provenance propagation paths. Since provenance information is piggybacked on the messages for execution coordination and contact list updating, no additional messages are required. To evaluate our approach, an experimental study has been carried out on randomly generated job workflows. The results show that our approach has less communication overhead than the one using a centralized provenance server for provenance information recording and collection. In the current algorithm, a subjob's provenance information might be propagated along multiple propagation paths. As future work, the shortest and unique propagation path for a given subjob will be identified. This will further reduce the communication cost caused by propagation of provenance information. Although currently the execution coordination in the MCCF uses the contact list based mechanism, the provenance recording and collection protocol proposed in this paper should also work for other message-passing based execution coordination mechanisms (e.g., a mailbox based mechanism [18]).

References
1. Groth, P., Luck, M., Moreau, L.: A protocol for recording provenance in service-oriented grids. In: 8th Intl Conf on Principles of Distributed Systems (PODIS 2004), Grenoble, France (December 2004) 124–139


2. Wong, S.C., Miles, S., Fang, W.J., Groth, P., Moreau, L.: Provenance-based validation of e-science experiments. In: 4th Intl Semantic Web Conference. Volume 3729, Galway, Ireland (November 2005) 801–815
3. Rajbhandari, S., Wootten, I., Ali, A.S., Rana, O.F.: Evaluating provenance-based trust for scientific workflows. In: 6th IEEE Intl Symp on Cluster Computing and the Grid (CCGrid2006), Singapore (May 2006) 365–372
4. Townend, P., Groth, P., Xu, J.: A provenance-aware weighted fault tolerance scheme for service-based applications. In: 8th IEEE Intl Symp on Object-oriented Real-time Distributed Computing, USA (May 2005) 258–266
5. Foster, I., Voeckler, J., Wilde, M., Zhao, Y.: Chimera: A virtual data system for representing, querying, and automating data derivation. In: 14th Intl Conf on Scientific and Statistical Database Management (July 2002) 37–46
6. Rajbhandari, S., Walker, D.W.: Support for provenance in a service-based computing Grid. In: UK e-Science All Hands Meeting 2004, UK (September 2004)
7. Zhao, J., Goble, C., Greenwood, M., Wroe, C., Stevens, R.: Annotating, linking and browsing provenance logs for e-Science. In: Wksp on Semantic Web Technologies for Searching and Retrieving Scientific Data (in conjunction with ISWC2003, CEUR Workshop Proceedings). Volume 83, Florida, USA (October 2003)
8. Bose, R., Frew, J.: Composing lineage metadata with XML for custom satellite-derived data products. In: 16th Intl Conf on Scientific and Statistical Database Management, Washington, DC, USA (2004) 275–284
9. Simmhan, Y.L., Plale, B., Gannon, D.: A framework for collecting provenance in data-centric scientific workflows. In: IEEE Intl Conf on Web Services 2006 (ICWS 2006), Chicago, USA (September 2006)
10. Simmhan, Y.L., Plale, B., Gannon, D.: A survey of data provenance in e-Science. SIGMOD Record 34(3) (September 2005) 31–36
11. Groth, P., Jiang, S., Miles, S., Munrow, S., Tan, V., Tsasakou, S., Moreau, L.: An architecture for provenance systems. Technical report, Electronics and Computer Science, University of Southampton (October 2006)
12. Feng, Y.H., Cai, W.T.: MCCF: A distributed Grid job workflow execution framework. In: 2nd Intl Symposium on Parallel and Distributed Processing and Applications. Volume 3358, Hong Kong, China, LNCS (December 2004) 274–279
13. Brandt, R., Reiser, H.: Dynamic adaptation of mobile agents in heterogeneous environments. In: 5th Intl Conf on Mobile Agents (MA2001). Volume 2240, Atlanta, USA, LNCS (December 2001) 70–87
14. Fuggetta, A., Picco, G.P., Vigna, G.: Understanding code mobility. IEEE Trans on Software Engineering 24(5) (1998) 342–361
15. Feng, Y.H., Cai, W.T., Cao, J.N.: Communication partner identification in distributed job workflow execution over the Grid. In: 3rd Intl Wksp on Mobile Distributed Computing (in conjunction with ICDCS05), Columbus, Ohio, USA (June 2005) 587–593
16. Cabri, G., Leonardi, L., Zambonelli, F.: Coordination infrastructures for mobile agents. Microprocessors and Microsystems 25(2) (April 2001) 85–92
17. Berger, M.: Multipath packet switch using packet bundling. In: High Performance Switching and Routing (Workshop on Merging Optical and IP Technologies), Kobe, Japan (2002) 244–248
18. Cao, J.N., Zhang, L., Feng, X., Das, S.K.: Path pruning in mailbox-based mobile agent communications. J. of Info Sci and Eng 20(3) (2004) 405–242

EPLAS: An Epistemic Programming Language for All Scientists

Isao Takahashi, Shinsuke Nara, Yuichi Goto, and Jingde Cheng
Department of Information and Computer Sciences, Saitama University, 255 Shimo-Okubo, Sakura-ku, Saitama-shi, Saitama, 338-8570, Japan
{isao, nara, gotoh, cheng}@aise.ics.saitama-u.ac.jp

Abstract. Epistemic Programming has been proposed as a new programming paradigm for scientists to program their epistemic processes in scientific discovery. As the first step to construct an epistemic programming environment, this paper proposes the first epistemic programming language, named ‘EPLAS’. The paper analyzes the requirements of an epistemic programming language, presents the ideas to design EPLAS, shows the important features of EPLAS, and presents an interpreter implementation of EPLAS. Keywords: Computer-aided scientific discovery, Epistemic process, Strong relevant logic, Scientific methodology.

1 Introduction

As a new programming paradigm, Cheng has proposed Epistemic Programming for scientists to program their epistemic processes in scientific discovery [3,4]. Conventional programming regards numeric values and/or character strings as the subject of computing, takes assignments as basic operations of computing, and regards algorithms as the subject of programming, but Epistemic Programming regards beliefs as the subject of computing, takes primary epistemic operations as basic operations of computing, and regards epistemic processes as the subject of programming [3,4]. Under the strong relevant logic model of epistemic processes proposed by Cheng, a belief is represented by a formula A ∈ F(EcQ), where EcQ is a predicate strong relevant logic [3,4] and F(EcQ) is the set of all well-formed formulas of EcQ. The three primary epistemic operations are epistemic deduction, epistemic expansion, and epistemic contraction. Let K ⊆ F(EcQ) be a set of sentences that represents the explicitly known knowledge and current beliefs of an agent, and let TEcQ(P) be a formal theory with premises P based on EcQ. For any A ∈ TEcQ(K) − K where TEcQ(K) ≠ K, an epistemic deduction of A from K, denoted by K d+A, by the agent is defined as K d+A =df K ∪ {A}. For any A ∉ TEcQ(K), an epistemic expansion of K by A, denoted by K e+A, by the agent is defined as K e+A =df K ∪ {A}. For any A ∈ K, an epistemic contraction of K by A, denoted by K −A, by the agent is defined as K −A =df K − {A}. An epistemic process of an agent is a sequence K0, o1, K1, o2, K2, ..., Kn−1,
on, Kn, where Ki (n ≥ i ≥ 0) is an epistemic state, oi+1 (n > i ≥ 0) is one of the primary epistemic operations, and Ki+1 is the result of applying oi+1 to Ki. In particular, K0 is called the primary epistemic state of the epistemic process and Kn is called the terminal epistemic state of the epistemic process. An epistemic program is a sequence of instructions such that, for a primary epistemic state given as the initial input, an execution of the instructions produces an epistemic process where every primary epistemic operation corresponds to an instruction whose execution results in an epistemic state; in particular, the terminal epistemic state is also called the result of the execution of the program [3,4]. However, until now, there has been no environment to perform Epistemic Programming and to run epistemic programs. We propose the first epistemic programming language, named 'EPLAS': an Epistemic Programming Language for All Scientists. In this paper, we analyze the requirements of an epistemic programming language first, and then present our design ideas for EPLAS and its important features. We also present an interpreter implementation of EPLAS.

2 Requirements

We define the requirements for an epistemic programming language and its implementation. We define R1 in order to write and execute epistemic programs.

R 1. They should provide ways to represent beliefs and epistemic states as primary data, and epistemic operations as primary operations, and perform the operations.

This is because, in Epistemic Programming, the subject of computing is a belief, the basic operations of computing are epistemic operations, and the subject of these operations is epistemic states. We also define R2, R3, and R4 in order to write and execute epistemic programs that help scientists with scientific discovery.

R 2. They should represent and execute operations to perform deductive, inductive, and abductive reasoning as primary operations.

Scientific reasoning is indispensable to any scientific discovery because any discovery must be previously unknown or unrecognized before the completion of the discovery process, and reasoning is the only way to draw new conclusions from premises that are known facts or assumed hypotheses [3,4]. Therefore, reasoning is one of the ways to get new beliefs when scientists perform epistemic expansion.

R 3. They should represent and execute operations to help with dissolving contradictions as primary operations.

Scientists perform epistemic contraction in order to dissolve contradictions because beliefs may be inconsistent and incomplete.

R 4. They should represent and execute operations to help with trial-and-error as primary operations.

Scientists do not always accurately know the subjects of scientific discovery beforehand. Therefore, scientists must perform trial-and-error.


In a process of trial-and-error, scientists make many assumptions and test the assumptions by many different methods. It is demanding work for scientists to make combinations of the assumptions and the methods. Furthermore, it is also demanding for scientists to test the combinations one at a time without omission.

3 EPLAS

EPLAS is designed as a typical procedural and strongly, dynamically typed language. It has facilities to program control structures (if-then statements, do-while statements, and foreach statements) and procedures, and has a nested static-scope rule. With an attribute grammar, we defined the syntax and semantics of EPLAS and have made the EPLAS manual available [1]. To satisfy R1, EPLAS should provide beliefs as a primary data type. For that purpose, EPLAS provides a way to represent beliefs as a primary data type. EPLAS also provides an operation to input a belief from standard input, which is denoted by 'input belief'. Conventional programming languages do not provide beliefs as a primary data type because their subjects of computing are lower-level data types. Then, to satisfy R1, EPLAS should provide epistemic states as a primary data type. Therefore, EPLAS provides sets of beliefs as a set structured data type and ensures that all epistemic states are numbered. EPLAS also provides an operation to get the i-th epistemic state, which is denoted by 'get state(i)'. Most conventional programming languages do not provide a set structured data type and do not ensure that all epistemic states are numbered. Furthermore, EPLAS should provide epistemic operations as primary operations to satisfy R1. Hence, EPLAS provides operations to perform epistemic deduction, epistemic expansion, and epistemic contraction, extended to multiple beliefs, as primary operations. An epistemic deduction is denoted by 'deduce', and makes the current epistemic state Ki the next epistemic state Ki+1 = Ki ∪ S for any S ⊆ TEcQ(Ki) − Ki where Ki ⊆ F(EcQ) and TEcQ(Ki) ≠ Ki. An epistemic expansion by multiple beliefs S is denoted by 'expand(S)', and makes the current epistemic state Ki the next epistemic state Ki+1 = Ki ∪ S for any S ⊈ TEcQ(Ki) where Ki ⊆ F(EcQ). An epistemic contraction by multiple beliefs S is denoted by 'contract(S)', and makes the current epistemic state Ki the next epistemic state Ki+1 = Ki − S for any S ⊂ Ki where Ki ⊆ F(EcQ). Some conventional programming languages provide operations to perform epistemic expansion and epistemic contraction as set operations, but no conventional programming language provides an operation to perform epistemic deduction. There are three forms of reasoning: deductive, inductive, and abductive reasoning. Therefore, an epistemic programming language should provide operations to perform reasoning in these three forms. In order to satisfy R2, EPLAS should provide a way to represent various reasoning, at least reasoning in the three forms. For that purpose, EPLAS provides inference rules as a primary data type. Inference rules are formulated with schemata of well-formed formulas to reason by pattern matching,
and consist of at least one schema of a well-formed formula as a premise and at least one schema of a well-formed formula as a conclusion [7]. Let K be premises including '{P0(a0), P0(a1), P1(a0), ∀x0(P2(x0) → P1(x0))}', ir1 be an inference rule: 'P0(x0), P0(x1) ⊢ ∀x2P0(x2)', ir2 be an inference rule: 'P0(x0), P1(x0), P0(x1) ⊢ P1(x1)', and ir3 be an inference rule: 'P2(x0) → P1(x1), P1(x1) ⊢

P2(x0)’. ir1 means an inductive generalization [5], ir2 means an arguments from analogy [5], and ir3 means an abductive reasoning [8]. ‘∀x2P0(x2)’ is derived from K by ir1 , ‘P1(a1)’ is derived from K by ir2 , and ‘P2(a0)’ is derived from K by ir3 . EPLAS also provides an operation to input an inference rule from standard input, which is denoted by ‘input rule’. Moreover, in order to satisfy R2, EPLAS should provide an operation to derive conclusions from beliefs in the current epistemic state by applying an inference rule to the beliefs and to perform epistemic expansion by the derived conclusions. A reasoning by an inference rule ir is denoted by ‘reason(ir)’, and makes the current epistemic state Ki the next epistemic state Ki+1 = Ki ∪ S where Ki ⊆ F(EcQ), S ⊂ Rir (Ki ), S = φ, and Rir (Ki ) is a set of beliefs derived from Ki by an inference rule ir. Any conventional programming languages do not provide inference rules as a primary data type and a reasoning operation as a primary operation. As operations to help with dissolving contradictions, at least, an epistemic programming language should provide an operation to judge whether two beliefs are conflicting or not. Then, it should provide operations to output a derivation tree of a belief and to get all beliefs in a derivation tree of a belief in order for scientists to investigate causes of contradictions. It should also provide an operation to perform epistemic contraction of beliefs derived from a belief in order for scientists to reject beliefs derived from a conflicting belief. In order to satisfy R3, EPLAS should provide an operation to judge whether two beliefs are conflicting or not. For that purpose, EPLAS provides the operation as a primary operation, which is denoted by ‘$$’. The binary operator ‘$$’ is to judge whether two beliefs are conflicting or not, and returns true if and only if one belief A is negation of the other belief B, or false if not so. Then, in order to satisfy R3, EPLAS should provide operations to output a derivation tree of a belief. Hence, EPLAS provides an operation to output a derivation tree of a belief A from standard output, which is denoted by ‘see tree(A)’. EPLAS should also provide an operation to get all beliefs in a derivation tree of a belief. Therefore, EPLAS provides an operation to get beliefs in the derivation tree of a belief A, which is denoted by ‘get ancestors(A)’. Furthermore, in order to satisfy R3, EPLAS should provide an operation to perform epistemic contractions of all beliefs derived by the specific beliefs. For that purpose, EPLAS provide the operation to perform an epistemic contractions of beliefs derived by the specific beliefs S, which is denoted by ‘contract derivation(S)’, and which makes the current epistemic state Ki the next epistemic state Ki+1 = Ki − (TEcQ (Ki ) − TEcQ (Ki − S)) where Ki ⊆ F(EcQ). Evidently, conventional programming languages do not provide these operations. An epistemic programming language should provide operations to make combinations of beliefs and to test each combination in order to verify combinations


of assumptions, at least as operations to help with trial-and-error. It should also provide operations to make permutations of procedures and to test each permutation, in order for scientists to test many methods in various orders. Furthermore, it should provide an operation to change the current epistemic state back into a past epistemic state, in order for scientists to test assumptions by various methods in the same epistemic state. To satisfy R4, EPLAS should provide operations to make combinations of beliefs and to test each combination. For that purpose, EPLAS provides sets of sets as a set-set structured type and set operations, e.g., sum, difference, intersection, power, and direct product. Some conventional programming languages provide such a structured data type and set operations, but most conventional programming languages do not. Then, to satisfy R4, EPLAS should provide operations to make permutations of procedures and to test each permutation. Hence, EPLAS provides procedures as a primary data type, sequences as a seq structured type, and sequence operations, e.g., appending to and dropping from the bottom. A procedure value is the name of a procedure with arguments, and is similar to a function pointer in the C language. In order to satisfy R4, furthermore, EPLAS should provide an operation to change the current epistemic state into a past one identified by a number. EPLAS provides this operation, which is denoted by ‘return to(n)’, and makes the current epistemic state Ki the next epistemic state Ki+1 = Kn, where Kn is the n-th epistemic state.
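As a concrete illustration, the epistemic state transitions described above can be modelled along the following lines. This is a minimal sketch only, not the EPLAS implementation: the class name EpistemicStates, the apply_rule callback, and the derived argument of contract_derivation are assumptions introduced for the example.

```python
# Illustrative sketch of the epistemic-state sequence K_0, K_1, ..., K_i.
# Beliefs are opaque hashable objects.

class EpistemicStates:
    def __init__(self, primary_beliefs):
        self.states = [frozenset(primary_beliefs)]    # K_0, K_1, ..., K_i

    def current(self):
        return self.states[-1]

    def expand(self, new_beliefs):                    # K_{i+1} = K_i ∪ S
        self.states.append(self.current() | frozenset(new_beliefs))

    def contract(self, beliefs):                      # K_{i+1} = K_i − S
        self.states.append(self.current() - frozenset(beliefs))

    def contract_derivation(self, beliefs, derived):
        # One simple reading of contract_derivation(S): remove S together with
        # everything derived from it (the set "derived" must be supplied by the caller).
        self.states.append(self.current() - (frozenset(beliefs) | frozenset(derived)))

    def reason(self, rule, apply_rule):               # reason(ir): expand by R_ir(K_i)
        conclusions = apply_rule(rule, self.current())
        if conclusions:
            self.expand(conclusions)

    def return_to(self, n):                           # return_to(n): K_{i+1} = K_n
        self.states.append(self.states[n])
```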

4 An Interpreter Implementation of EPLAS

We show an interpreter implementation of EPLAS. We implemented the interpreter in Java (J2SE 6.0) so that it is available on various computer environments. We implemented the interpreter by naive methods, however, because it is a prototype and the first step towards constructing an epistemic programming environment. The interpreter consists of the analyzer section and the attribute evaluation section. The analyzer section analyzes a program in an input source file and builds a parse tree. It has been implemented with SableCC [6]. According to the semantic rules in the attribute grammar of EPLAS, the attribute evaluation section evaluates attributes on the parse tree made by the analyzer section. The attribute evaluation section has the symbol table, the beliefs manager, the epistemic states manager, and the reasoner to evaluate attributes. The symbol table manages declarations of variables, functions, and procedures, and the data types and structured types of variables. It also ensures that EPLAS is a strongly, dynamically typed language. We implemented it with a hash table in the usual way. The beliefs manager manages all input and/or derived beliefs and all their derivation trees, and provides functions to perform input belief, see tree, and get ancestors. We implemented the beliefs manager as follows. The beliefs manager has a set of tuples, where a tuple consists of a belief and the derivation tree of that belief. When performing input belief, the beliefs manager analyzes


Table 1. Vocabulary of a Language Producing Beliefs

Vocabulary    Symbols
Constants     a0, a1, ..., ai, ...
Variables     x0, x1, ..., xi, ...
Functions     f0, f1, ..., fi, ...
Predicates    P0, P1, ..., Pi, ...
Connectives   =>(entailment), &(and), !(negation)
Quantifiers   @(forall), #(exists)
Punctuation   (, ), ,

an input string according to the belief form. A belief is formed in a language with the vocabulary in Table 1 and the following Production Rules 1 and 2.

Production Rule 1. Term (1) Any constant is a term and any variable is also a term. (2) If f is a function and t0, ..., tm are terms, then f(t0, ..., tm) is a term. (3) Nothing else is a term.

Production Rule 2. Formula (1) If P is a predicate and t0, ..., tm are terms, then P(t0, ..., tm) is a formula. (2) If A and B are formulas, then (A => B), (A & B), and (! A) are formulas. (3) If A is a formula and x is a variable, then (@xA) and (#xA) are formulas. (4) Nothing else is a formula.

The beliefs manager then adds to its set a new tuple consisting of the input belief and a tree whose only node is a root denoting that belief. When performing see tree(A), the beliefs manager outputs the derivation tree of A with JTree. When performing get ancestors(A), the beliefs manager collects the beliefs in the derivation tree of A by scanning the tree and returns them. The epistemic states manager manages all epistemic states from the primary epistemic state to the terminal epistemic state, and provides functions to perform expand, contract, contract derivation, return to, get state, and get id. We implemented the epistemic states manager as follows. The epistemic states manager holds a variable-length sequence of sets of beliefs, where each set of beliefs is an epistemic state. When performing expand(S), the epistemic states manager appends the union of the set at the bottom of the sequence and S to the sequence. When performing contract(S), it appends the difference of the set at the bottom of the sequence and S. When performing contract derivation(S), it appends the difference of the set at the bottom of the sequence and all beliefs in the derivation trees of S. When performing return to(i), the epistemic states manager appends


the i-th epistemic state to the sequence. When performing get id, the epistemic states manager returns the number of elements of the sequence. When performing get state(i), it returns the set of beliefs in the i-th element of the sequence. The reasoner provides functions to perform input rule and reason. We implemented the reasoner as follows. EnCal [2,7] is an automated forward deduction system for general-purpose entailment calculus. EnCal automatically deduces new conclusions from given premises by applying inference rules to the premises and to deduced results. Therefore, the reasoner has been implemented as an interface to EnCal. When performing input rule, the reasoner analyzes an input string according to the inference rule form. The inference rule form is “SLogicalSchema1, · · ·, SLogicalScheman ⊢ SLogicalScheman+1”. An “SLogicalSchema” is formed in a language with the vocabulary in Table 2 and Production Rules 1 and 3.

Table 2. Vocabulary of a Language Producing Semi-Logical Schemata

Vocabulary           Symbols
Constants            a0, a1, ..., ai, ...
Variables            x0, x1, ..., xi, ...
Functions            f0, f1, ..., fi, ...
Predicates           P0, P1, ..., Pi, ...
Predicate Variables  X0, X1, ..., Xi, ...
Formula Variables    A0, A1, ..., Ai, ...
Connectives          =>(entailment), &(and), !(negation)
Quantifiers          @(forall), #(exists)
Punctuation          (, ), ,

Production Rule 3. Semi-logical Schema (1) Any formula variable is a semi-logical formula. (2) If P is a predicate or a predicate variable and t0, ..., tm are terms, then P(t0, ..., tm) is a semi-logical formula. (3) If A and B are semi-logical formulas, then (A => B), (A & B), and (! A) are semi-logical formulas. (4) If A is a semi-logical formula and x is a variable, then (@xA) and (#xA) are semi-logical formulas. (5) Nothing else is a semi-logical formula.

When performing reason(ir), the reasoner translates ir into an inference rule in EnCal form and the current beliefs returned by the epistemic states manager into formulas in EnCal form, inputs these data to EnCal and executes it, then gets the formulas derived by ir, translates them into EPLAS form, and makes the beliefs manager register them.
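To make the belief language concrete, the following is an illustrative sketch of a formula representation following Production Rules 1 and 2, together with a negation test in the spirit of the ‘$$’ operator. The class names and the conflicting() helper are assumptions for this example; they are not taken from the EPLAS/Java implementation, and the predicate- and formula-variable extensions of Production Rule 3 are not covered.

```python
# Illustrative sketch: terms and formulas per Production Rules 1 and 2,
# plus a simple conflict test reflecting the '$$' operator.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Term:                      # constants, variables, f(t0, ..., tm)
    name: str
    args: Tuple['Term', ...] = ()

@dataclass(frozen=True)
class Atom:                      # P(t0, ..., tm)
    pred: str
    args: Tuple[Term, ...]

@dataclass(frozen=True)
class Binary:                    # (A => B) or (A & B)
    op: str                      # '=>' or '&'
    left: object
    right: object

@dataclass(frozen=True)
class Not:                       # (! A)
    sub: object

@dataclass(frozen=True)
class Quant:                     # (@xA) or (#xA)
    q: str                       # '@' or '#'
    var: str
    body: object

def conflicting(a, b) -> bool:
    """True iff one formula is the negation of the other (the '$$' test)."""
    return (isinstance(a, Not) and a.sub == b) or (isinstance(b, Not) and b.sub == a)

# Example: P0(a0) and (! P0(a0)) are conflicting beliefs.
p = Atom('P0', (Term('a0'),))
assert conflicting(p, Not(p))
```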

5 Concluding Remarks

As the first step towards constructing an epistemic programming environment, we proposed the first epistemic programming language, named ‘EPLAS’, and its interpreter implementation. EPLAS provides ways for scientists to write epistemic programs that help them with reasoning, dissolving contradictions, and trial-and-error. We also presented an interpreter implementation of EPLAS. We have thus provided the first environment in which to perform Epistemic Programming and run epistemic programs. In future work, we would like to establish an Epistemic Programming methodology to make scientific discovery become a ‘science’ and/or an ‘engineering’.

Acknowledgments. We would like to thank the referees for their valuable comments, which improved the quality of this paper. The work presented in this paper was supported in part by The Ministry of Education, Culture, Sports, Science and Technology of Japan under Grant-in-Aid for Exploratory Research No. 09878061 and Grant-in-Aid for Scientific Research (B) No. 11480079.

References
1. AISE Lab., Saitama University: EPLAS Reference Manual (2007)
2. Cheng, J.: EnCal: An automated forward deduction system for general-purpose entailment calculus. In: Terashima, N., Altman, E. (eds.): Advanced IT Tools, IFIP World Conference on IT Tools, IFIP96 - 14th World Computer Congress. Chapman & Hall (1996) 507-517
3. Cheng, J.: Epistemic programming: What is it and why study it? Journal of Advanced Software Research 6(2) (1999) 153-163
4. Cheng, J.: A strong relevant logic model of epistemic processes in scientific discovery. In: Kawaguchi, E., Kangassalo, H., Jaakkola, H., Hamid, I.A. (eds.): Information Modelling and Knowledge Bases XI, Frontiers in Artificial Intelligence and Applications, Vol. 61. IOS Press (2000) 136-159
5. Flach, P.A., Kakas, A.C.: Abductive and inductive reasoning: background and issues. In: Flach, P.A., Kakas, A.C. (eds.): Abduction and Induction: Essays on Their Relation and Integration. Kluwer Academic Publishers (2000)
6. Gagnon, E.M., Hendren, L.J.: SableCC. http://www.sablecc.org
7. Nara, S., Omi, T., Goto, Y., Cheng, J.: A general-purpose forward deduction engine for modal logics. In: Khosla, R., Howlett, R.J., Jain, L.C. (eds.): Knowledge-Based Intelligent Information and Engineering Systems, 9th International Conference, KES2005, Melbourne, Australia, 14-16 September 2005, Proceedings, Part II, Vol. 3682. Springer-Verlag (2005) 739-745
8. Peirce, C.S.: Collected Papers of Charles Sanders Peirce. Harvard University Press (1958)

Translation of Common Information Model to Web Ontology Language

Marta Majewska (1), Bartosz Kryza (2), and Jacek Kitowski (1,2)

(1) Institute of Computer Science AGH-UST, Mickiewicza 30, 30-059 Krakow, Poland
(2) Academic Computer Centre Cyfronet-AGH, Nawojki 11, 30-950 Krakow, Poland
{mmajew, bkryza, kito}@agh.edu.pl

Abstract. This paper presents a brief overview of the work on translation of the Common Information Model (CIM) to the Web Ontology Language (OWL) standard. The main motivation for the work is given, along with a discussion of the major issues faced during this work. The paper also contains a comparison of existing approaches to the conversion of CIM to OWL and presents the CIM2OWL tool, which performs the conversion of the CIM schema and allows conversion of CIM instances – representing, for instance, configurations of particular systems – to OWL individuals. Key words: Metadata, ontologies, Grid computing, Common Information Model, ontology translation.

1 Introduction

Several researchers have recently raised the issue of translating the existing and widely recognized Distributed Management Task Force (DMTF) standard for resource description, called Common Information Model (CIM) [1], to the Web Ontology Language (OWL) [2]. Especially in the Grid setting, where OWL could be used for the representation of semantic Grid metadata, the problem of interoperability appears. Among reference ontologies for modeling hardware and software computer resources, the DMTF Common Information Model, as a well-known, organizationally supported and regularly updated meta-model for the considered area (e.g., commonly referenced in software for the management of systems, networks, users and applications across multiple vendor environments), seemed promising. CIM is a hybrid approach, inspired by object-oriented modeling and database information modeling. The CIM Schema consists in particular of the Core and Common Models as well as user-developed Extension Schemas. As it introduces metadata for annotating model classes and instances, it is partially not compliant with the UML methodology. OWL is the W3C-recommended ontology language for the Semantic Web, which exploits many of the strengths of Description Logics, including well-defined semantics and practical reasoning techniques. OWL offers greater expressiveness of information content description than that supported by XML, RDF, and RDF Schema, by providing additional vocabulary along with some formal semantics.

2 CIM to OWL Mapping

In the beginning, the mapping between semantically equivalent constructs in MOF and OWL was established. That included class definition, inheritance (to some extent), data type attributes, cardinality constraints, comments, etc. The succeeding step in definition of the mapping was to extend the existing mapping with representations of MOF constructs, which do not have direct equivalents in OWL.

Table 1. The mapping definition from CIM to OWL (columns: CIM Artifact, OWL Construct; rows include Class, Generalization, and Association)

The impact of the ratio of output data size to input data size is shown in Fig. 3. The ADLT model performs well for communication-intensive applications that generate small output data compared to the input data size (low oiRatio). For computation-intensive applications, the ratio of output data size to input data size does not affect the performance of the algorithms much, except when ccRatio is 10000.

5 Conclusion

The problem of scheduling data-intensive loads on grid platforms is addressed. We used the divisible load paradigm to derive closed-form solutions for


processing time considering communication time. In the proposed model, the optimality principle was utilized to ensure an optimal solution. The experimental results show that the proposed ADLT model performs better than the CDLT model in terms of expected processing time and load balancing.


Providing Fault-Tolerance in Unreliable Grid Systems Through Adaptive Checkpointing and Replication

Maria Chtepen (1), Filip H.A. Claeys (2), Bart Dhoedt (1), Filip De Turck (1), Peter A. Vanrolleghem (2), and Piet Demeester (1)

(1) Department of Information Technology (INTEC), Ghent University, Sint-Pietersnieuwstraat 41, Ghent, Belgium
{maria.chtepen, bart.dhoedt, filip.deturck}@intec.ugent.be
(2) Department of Applied Mathematics, Biometrics and Process Control (BIOMATH), Ghent University, Coupure Links 653, Ghent, Belgium
{filip.claeys, peter.vanrolleghem}@biomath.ugent.be

Abstract. As grids typically consist of autonomously managed subsystems with strongly varying resources, fault-tolerance forms an important aspect of the scheduling process of applications. Two well-known techniques for providing fault-tolerance in grids are periodic task checkpointing and replication. Both techniques mitigate the amount of work lost due to changing system availability, but can introduce significant runtime overhead. The latter largely depends on the length of the checkpointing interval and the chosen number of replicas, respectively. This paper presents a dynamic scheduling algorithm that switches between periodic checkpointing and replication to exploit the advantages of both techniques and to reduce the overhead. Furthermore, several novel heuristics are discussed that perform on-line adaptive tuning of the checkpointing period based on historical information on resource behavior. Simulation-based comparison of the proposed combined algorithm versus traditional strategies based on checkpointing and replication only suggests a significant reduction of average task makespan for systems with varying load. Keywords: Grid computing, fault-tolerance, adaptive checkpointing, task replication.

1 Introduction

A typical grid system is an aggregation of (widespread) heterogeneous computational and storage resources managed by different organizations. The term “heterogeneous” addresses in this case not only hardware heterogeneity, but also differences in resource utilization. Resources connected into grids can be dedicated supercomputers, clusters, or merely PCs of individuals utilized inside the grid during idle periods (so-called desktop grids). As a result of this highly autonomous and heterogeneous nature of grid resources, failure becomes a commonplace feature that can have a significant impact on the system performance.


A failure can occur due to a resource or network corruption, temporary unavailability periods initiated by resource owners, or sudden increases in resource load. To reduce the amount of work lost in the presence of failure, two techniques are often applied: task checkpointing and replication. The checkpointing mechanism periodically saves the status of running tasks to a shared storage and uses this data to restore tasks in case of resource failure. Task replication is based on the assumption that the probability of a single resource failure is much higher than that of a simultaneous failure of multiple resources. The technique avoids task recomputation by starting several copies of the same task on different resources. Since our previous work [1] has extensively studied the task replication issue, this paper is mainly dedicated to the checkpointing approach. The purpose of checkpointing is to increase fault-tolerance and to speed up application execution on unreliable systems. However, as was shown by Oliner et al. [2], the efficiency of the mechanism is strongly dependent on the length of the checkpointing interval. Overzealous checkpointing can amplify the effects of failure, while infrequent checkpointing results in too much recomputation overhead. As may be presumed, the establishment of an optimal checkpointing frequency is far from a trivial task, which requires good knowledge of the application and the distributed system at hand. Therefore, this paper presents several heuristics that perform on-line adaptive tuning of statically provided checkpointing intervals for parallel applications with independent tasks. The designed heuristics are intended for incorporation in a dynamic scheduling algorithm that switches between job replication and periodic checkpointing to provide fault-tolerance and to reduce the potential job delay resulting from the adoption of both techniques. An evaluation in a simulation environment (i.e. DSiDE [3]) has shown that the designed algorithm can significantly reduce the task delay in systems with varying load, compared to algorithms solely based on either checkpointing or replication. This paper is organized as follows: in Section 2 the related work is discussed; the assumed system model is presented in Section 3; Section 4 elaborates on the adaptive checkpointing heuristics and the proposed scheduling algorithm; simulation results are introduced in Section 5; Section 6 summarizes the paper and gives a short overview of future work.

2 Related Work

Much work has already been done on checkpointing performance prediction and determination of the optimal checkpointing interval for uniprocessor and multiprocessor systems. For uniprocessor systems, selection of such an interval is for the most part a solved problem [4]. The results for parallel systems are less straightforward [5], since the research is often based on particular assumptions, which reduce the general applicability of the proposed methods. In particular, it is generally presumed that failures are independent and identically distributed. Studies of real systems, however, show that failures are correlated temporally and spatially and are not identically distributed. Furthermore, the behavior of checkpointing schemes under these realistic failure distributions does not follow the behavior predicted by standard checkpointing models [2,6].


Since finding the overall optimal checkpointing frequency is a complicated task, other types of periodic checkpointing optimization have been considered in the literature. Quaglia [7] presents a checkpointing scheme for optimistic simulation which is a mixed approach between periodic and probabilistic checkpointing. The algorithm estimates the probability of roll-back before the execution of each simulation event. Whenever the event execution is going to determine a large simulated time increment, a checkpoint is taken prior to this event; otherwise the checkpoint is omitted. To prevent excessively long suspension of checkpoints, a maximum number of allowed event executions between two successive checkpoint operations is fixed. Oliner [8] proposes a so-called cooperative checkpointing approach that allows the application, compiler, and system to jointly decide when checkpoints should be performed. Specifically, the application requests checkpoints, which have been optimized for performance by the compiler, and the system grants or denies these requests. This approach has the disadvantage that applications have to be modified to trigger checkpointing periodically. Job replication and determination of the optimal number of replicas are other rich fields of research [1,9]. However, to our knowledge, no methods dynamically alternating between replication and checkpointing have been introduced so far.

3 The System Model

A grid system running parallel applications with independent tasks is considered. The system is an aggregation of geographically dispersed sites, assembling collocated interchangeable computational (CR) and storage (SR) resources, and a number of services, such as a scheduler (GSched) and an information service (IS). It is assumed that all the grid components are stable except for CRs. The latter possess a varying failure and restore behavior, which is modelled to mimic reality as much as possible. As outlined by Zhang et al. [6], failures in large-scale distributed systems are mostly correlated and tend to occur in bursts. Besides, there are strong spatial correlations between failures and nodes, where a small fraction of the nodes incurs most of the failures. The checkpointing mechanism is either activated by the running application or by the GSched. In both cases, it takes W seconds before the checkpoint is completed and thus can be utilized for an eventual job restore. Furthermore, each checkpoint adds V seconds of overhead to the job run-time. Both parameters largely depend on the size C of the saved job state. There is also a recovery overhead P, which is the time required for a job to restart from a checkpoint. Obviously, the overhead introduced by periodic checkpointing and restart may not exceed the overhead of job restores without the use of checkpointing data. To limit the overhead, a good choice of the checkpointing frequency I is of crucial importance. Considering the assumed grid model, I_opt^j, the optimal checkpointing interval for a job j, is largely determined by the function I_opt^j = f(E_r^j, F_r, C^j), where E_r^j is the execution time of j on the resource r and F_r stands for the mean time between failures of r. Additionally, the value of


I_opt should be within the limits V < I_opt < E_r to make sure that jobs make execution progress despite periodic checkpointing. The difficulty of finding I_opt lies in the fact that it is hard to determine the exact values of the application and system parameters. Furthermore, E_r and F_r can vary over time as a consequence of changing system loads and resource failure/restore patterns. This suggests that a statically determined checkpointing interval may be an inefficient solution when optimizing system throughput. In what follows, a number of novel heuristics for adaptive checkpointing are presented.

4 Adaptive Checkpointing Strategies

4.1 Last Failure Dependent Checkpointing (LFDC)

One of the main disadvantages of unconditional periodic task checkpointing (UPC) is that it performs identically whether the task is executed on a volatile or a stable resource. To deal with this shortcoming, LFDC adjusts the initial job checkpointing interval to the behavior of each individual resource r and the total execution time of the considered task j, which results in a customized checkpointing frequency I_r^j. For each resource a timestamp T_r^f of its last detected failure is kept. When no failure has occurred so far, T_r^f is initialized with the system “start” time. GSched evaluates all checkpointing requests and allows only those for which the comparison T_c − T_r^f ≤ E_r^j evaluates to true, where T_c is the current system time. Otherwise, the checkpoint is omitted to avoid unnecessary overhead, as it is assumed that the resource is “stable”. To prevent excessively long checkpoint suspension, a maximum number of checkpoint omissions can be defined, similar to the solution proposed in [7].
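The LFDC admission test can be sketched as follows. This is an illustrative reading only; the function name and the max_omissions default are assumptions rather than the authors' code.

```python
# Illustrative LFDC sketch: allow a requested checkpoint only if the resource
# failed "recently enough" relative to the task's total execution time.
def lfdc_allow_checkpoint(t_now, last_failure_time, task_exec_time,
                          omitted_so_far, max_omissions=5):
    """Return True if the checkpoint should be taken, False if it may be omitted.
    max_omissions is an assumed safeguard against suspending checkpoints forever."""
    if omitted_so_far >= max_omissions:
        return True                                    # force a checkpoint
    return (t_now - last_failure_time) <= task_exec_time
```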

4.2 Mean Failure Dependent Checkpointing (MFDC)

Contrary to LFDC, MFDC adapts the initial checkpointing frequency as a function of a resource's mean failure interval (MF_r), which reduces the effect of an individual failure event. Furthermore, the considered job parameter is refined from the total job length to an estimate of the remaining execution time (RE_r^j). Each time checkpointing is performed, MFDC saves the task state and modifies the initial interval I to better fit the specific resource and job characteristics. The adapted interval I_r^j is calculated as follows: if r appears to be sufficiently stable or the task is almost finished (RE_r^j < MF_r), the frequency of checkpointing is reduced by increasing the checkpointing interval, I_r^j = I_r^j + I; in the other case it is desirable to decrease I_r^j and thus to perform checkpointing more frequently, I_r^j = I_r^j − I. To keep I_r^j within a reasonable range, MFDC always checks the newly obtained values against predefined boundaries, such that I_min ≤ I_r^j ≤ I_max. Both I_min and I_max can either be set by the application or initialized with the default values I_min = V + (E_r^j / 100) and I_max = V + (E_r^j / 2). In both equations the V term ensures that the time between consecutive checkpoints is never less than the time overhead added by each checkpoint, in which case


more time would be spent on checkpointing than on performing useful computations. After the I_r^j interval expires, either the next checkpointing event is performed, or a flag is set indicating that the checkpointing can be accomplished as soon as the application is able to provide a consistent checkpoint. In the case of rather reliable systems, the calibration of the checkpointing interval can be accelerated by replacing the default increment value I by a desirable percentage of the total or remaining task execution time.
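The interval-adaptation rule above can be expressed compactly as follows. This is an illustrative sketch of MFDC under the stated definitions; the function and parameter names are assumptions.

```python
# Illustrative MFDC sketch: widen the interval on stable resources or nearly
# finished tasks, narrow it otherwise, and clamp the result to [I_min, I_max].
def mfdc_update_interval(interval, base_increment, remaining_exec_time,
                         mean_failure_interval, checkpoint_overhead,
                         task_exec_time, i_min=None, i_max=None):
    if i_min is None:
        i_min = checkpoint_overhead + task_exec_time / 100.0   # default I_min = V + E/100
    if i_max is None:
        i_max = checkpoint_overhead + task_exec_time / 2.0     # default I_max = V + E/2
    if remaining_exec_time < mean_failure_interval:
        interval += base_increment      # resource stable or task almost done
    else:
        interval -= base_increment      # failures likely: checkpoint more often
    return max(i_min, min(interval, i_max))
```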

4.3 Adaptive Checkpoint and Replication-Based Scheduling (ACRS)

Checkpointing overhead can be avoided by providing another means to achieve system fault-tolerance. Replication is an efficient and almost costless solution if the number of task copies is well chosen and there is a sufficient amount of idle computational resources [1]. On the other hand, when computational resources are scarce, replication is undesirable as it delays the start of new jobs. In this section, an adaptive scheme is proposed that dynamically switches between task checkpointing and replication, based on run-time information about the system load. When the load is low, the algorithm is in "replication mode", where all tasks with fewer than R replicas are considered for submission to the available resources. Different strategies can be defined with regard to the assignment order. The strategy applied in this paper processes jobs according to ascending numbers of active replicas, which reduces the wait time of new jobs. The selected task replica is then submitted to a grid site s with minimal load Load_s^min and a minimum number of identical replicas. The latter is important to reduce the chance of simultaneous replica failure. Load_s^min is calculated as follows:

Load_s^min = min_{s∈S} ( (Σ_{r∈s} n_r) / (Σ_{r∈s} MIPS_r) ),    (1)

where S is the collection of all sites, n_r is the number of tasks on the resource r, and MIPS_r (Million Instructions Per Second) is the CPU speed of r. Inside the chosen site, the task will be assigned to the least loaded available resource with the smallest number of task replicas. The resource load Load_r is determined as

Load_r = n_r / MIPS_r.    (2)

The algorithm switches to the "checkpointing mode" when the idle resource availability IR drops to a certain limit L. In this mode, ACRS rolls back, if necessary, the earlier distributed active task replicas AR_j and starts task checkpointing. When processing the next task j, the following situations can occur (a sketch of this selection and mode-switching logic is given after the list):
– AR_j > 0: start checkpointing of the most advanced active replica and cancel execution of the other replicas;
– AR_j = 0 and IR > 0: start j on the least loaded available resource within the least loaded site, determined by (2) and (1) respectively;
– AR_j = 0 and IR = 0: select a random replicated job i, if any, start checkpointing of its most advanced active replica, cancel execution of the other replicas of i, and submit j to the best available resource.
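The following sketch illustrates site and resource selection per equations (1)-(2) and the three-case decision above. The data layout (a site as a list of (task count, MIPS) pairs) and all function names are assumptions for the example; the tie-break on identical replica counts is omitted for brevity.

```python
# Illustrative ACRS sketch: site selection per equations (1)-(2) and the
# replication/checkpointing mode decision.
def site_load(site):                     # site = list of (n_tasks, mips) resources
    return sum(n for n, _ in site) / sum(m for _, m in site)    # equation (1)

def resource_load(resource):
    n, mips = resource
    return n / mips                                             # equation (2)

def pick_site(sites):
    return min(sites, key=site_load)

def pick_resource(site):
    return min(site, key=resource_load)

def acrs_decide(active_replicas, idle_resources):
    """Return an action for the next task j, following the three cases above."""
    if active_replicas > 0:
        return "checkpoint most advanced replica, cancel the others"
    if idle_resources > 0:
        return "start j on least loaded resource of least loaded site"
    return "checkpoint a random replicated job's best replica, submit j"
```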

5 Simulation Results

Performance of the proposed methods was evaluated using the DSiDE simulator, on the basis of a grid model composed of 4 sites (3 CRs each) with varying availability. The availability parameter, which is defined to be the fraction of the total simulation time that the system spends performing useful work, is modelled as a variation of the CRs' failure and restore events [1]. The distribution of these events is identical for each resource inside the same site and is depicted in Table 1. The table also shows the distributions with which the burst-based, correlated nature of resource failures is approached. A failure event namely triggers a whole burst (see "Burst size") of connected resource malfunctions spread within a relatively short time interval (see "Burst distribution"). To simplify the algorithm comparisons, a workload composed of identical tasks with the following parameters was considered: S = 30 min, In (input size) = Out (output size) = 2 MB, W = 9 s, V = 2 s, P = 14 s. Furthermore, each CR has 1 MIPS CPU speed and is limited to processing at most 2 jobs simultaneously. Initially, the grid is heavily loaded, since tasks are submitted in bursts of 6 up to 25 tasks followed by a short (5 to 20 min) idle period. It is also assumed that the application can start generating the next checkpoint as soon as it is granted permission from GSched. The described grid model is observed during 24 hours of simulated time.

Table 1. Distributions of site failure and restore events together with distributions of the number and frequency of correlated failures

        Failure             Restore            Burst size    Burst distribution
Site 1  Uniform:1-300(s)    Uniform:1-300(s)   Uniform:1-3   Uniform:300-600(s)
Site 2  Uniform:1-900(s)    Uniform:1-300(s)   Uniform:1-3   Uniform:300-600(s)
Site 3  Uniform:1-2400(s)   Uniform:1-300(s)   Uniform:1-3   Uniform:300-600(s)
Site 4  No failure          -                  -             -
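The failure model of Table 1 can be reproduced with a small event generator along the following lines. This is only a sketch assuming uniform sampling exactly as listed in the table; DSiDE's actual event mechanism may differ, and the data structure SITES is an assumption.

```python
# Illustrative generator of burst-correlated failure/restore events per Table 1.
import random

SITES = {  # site: (failure bounds, restore bounds, burst size, burst spread), seconds
    1: ((1, 300),  (1, 300), (1, 3), (300, 600)),
    2: ((1, 900),  (1, 300), (1, 3), (300, 600)),
    3: ((1, 2400), (1, 300), (1, 3), (300, 600)),
    4: None,      # no failures
}

def failure_events(site, horizon):
    """Yield (failure_time, restore_time) tuples for one resource of a site."""
    if SITES[site] is None:
        return
    (f_lo, f_hi), (r_lo, r_hi), (b_lo, b_hi), (s_lo, s_hi) = SITES[site]
    t = 0.0
    while t < horizon:
        t += random.uniform(f_lo, f_hi)          # next triggering failure
        burst = random.randint(b_lo, b_hi)       # correlated burst of failures
        for _ in range(burst):
            if t >= horizon:
                break
            down = random.uniform(r_lo, r_hi)    # restore delay
            yield (t, t + down)
            t += random.uniform(s_lo, s_hi)      # spread inside the burst

for fail, restore in failure_events(1, 24 * 3600):
    pass  # feed these events into a simulation of choice
```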

The left part of Figure 1 shows the comparison, in terms of successfully executed tasks, between UPC, LFDC and MFDC checkpointing heuristics. The comparison is performed for a varying initial checkpointing frequency. For the MFDC algorithm Imin is set to the default value, while no limitation on the Imax is imposed. The results show that the performance of UPC strongly depends on the chosen checkpointing frequency. As can be seen on the UPC curve, excessively frequent checkpointing penalizes system performance to a greater extent than insufficient checkpointing. It can be explained by the fact that the considered grid model is relatively stable, with a total system availability of around 75%. In this case the checkpointing overhead exceeds the overhead of task recomputation. LFDC partially improves the situation by omitting some checkpoints. Since the algorithm doesn’t consider checkpoint insertion, the performance for an excessively long checkpointing interval is the same as for UPC. Finally, the fully dynamic scheme of MFDC proves to be the most effective. Starting from a random checkpointing frequency it guarantees system performance close to the one provided by UPC with an optimal checkpointing interval.


Fig. 1. (a) UPC, LFDC and MFDC checkpointing strategies performance; (b) CC, CR and ACRS scheduling performance

Finally, the ACRS (R = 2, L = 7, I = 30) is compared against common checkpointing (CC) and replication-based (CR) algorithms. The term “common checkpointing algorithm” refers to an algorithm monitoring resource downtimes and restarting failed jobs from their last saved checkpoint. The considered CC algorithm, as well as ACRS, makes use of the MFDC heuristic with I = 30 to determine the frequency of task checkpointing. The principle of the replication-based algorithm is quite straightforward: R = 2 replicas of each task are executed on preferably different resources; if a replica fails, it is restarted from scratch [1]. It is clear that replication-based techniques can be efficient only when the system possesses some free resources. Fortunately, most of the observed real grids alternate between peak periods and periods with relatively low load. To simulate this behavior, the initial task submission pattern is modified to include 2 users: the first sends to the grid a limited number of tasks every 35-40 min; the second launches significant batches of 15 up to 20 tasks every 3-5 hours. Figure 1 (right) shows the observed behavior of the three algorithms. During the simulations the number of tasks simultaneously submitted by the first user is modified as shown in the figure, which results in variations in system load among different simulations. When the system load is sufficiently low, ACRS and CC process an equal number of tasks, since each submitted task can be assigned to some resource. However, ACRS results in a lower average task makespan. When the system load increases, ACRS switches to checkpointing mode after a short transient phase. Therefore, the algorithm performs almost analogously to CC, except for a short delay due to the mode switch. Finally, CR considerably underperforms the other algorithms with respect to the number of executed tasks and average makespan. In the considered case ACRS provided up to a 15% reduction of the average task makespan compared to CC. The performance gain certainly depends on the overhead of checkpoints and the number of generated checkpointing events, which is close to optimal for the MFDC heuristic.

6 Conclusions and Future Work

This paper introduces a number of novel adaptive mechanisms, which optimize job checkpointing frequency as a function of task and system properties. The


heuristics are able to modify the checkpointing interval at run-time, reacting to dynamic system changes. Furthermore, a scheduling algorithm combining checkpointing and replication techniques for achieving fault-tolerance is introduced. The algorithm can significantly reduce task execution delay in systems with varying load by transparently switching between both techniques. In the following phase of the research, the proposed adaptive checkpointing solutions will be further refined to consider different types of applications, with low and high checkpointing overhead. A more robust mechanism will also be investigated to deal with temporary unavailability and a low response rate of the grid scheduler (which is responsible for managing checkpointing events).

References
1. Chtepen, M., Claeys, F.H.A., Dhoedt, B., De Turck, F., Demeester, P., Vanrolleghem, P.A.: Evaluation of Replication and Rescheduling Heuristics for Grid Systems with Varying Availability. In: Proc. of Parallel and Distributed Computing and Systems, Dallas (2006)
2. Oliner, A.J., Sahoo, R.K., Moreira, J.E., Gupta, M.: Performance Implications of Periodic Checkpointing on Large-Scale Cluster Systems. In: Proc. of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05), Washington (2005)
3. Chtepen, M., Claeys, F.H.A., Dhoedt, B., De Turck, F., Demeester, P., Vanrolleghem, P.A.: Dynamic Scheduling of Computationally Intensive Applications on Unreliable Infrastructures. In: Proc. of the 2nd European Modeling and Simulation Symposium, Barcelona, Spain (2006)
4. Vaidya, N.H.: Impact of Checkpoint Latency on Overhead Ratio of a Checkpointing Scheme. IEEE Transactions on Computers 46(8) (1997) 942-947
5. Wong, K.F., Franklin, M.: Checkpointing in Distributed Systems. Journal of Parallel and Distributed Systems 35(1) (1996) 67-75
6. Zhang, Y., Squillante, M.S., Sivasubramaniam, A., Sahoo, R.K.: Performance Implications of Failures in Large-Scale Cluster Scheduling. In: Proc. of the 10th Workshop on Job Scheduling Strategies for Parallel Processing, New York (2004)
7. Quaglia, F.: Combining Periodic and Probabilistic Checkpointing in Optimistic Simulation. In: Proc. of the 13th Workshop on Parallel and Distributed Simulation, Atlanta (1999)
8. Oliner, A.J.: Cooperative Checkpointing for Supercomputing Systems. Master Thesis, Massachusetts Institute of Technology (2005)
9. Li, Y., Mascagni, M.: Improving Performance via Computational Replication on a Large-Scale Computational Grid. In: Proc. of the 3rd International Symposium on Cluster Computing and the Grid, Washington (2003)

A Machine-Learning Based Load Prediction Approach for Distributed Service-Oriented Applications

Jun Wang, Yi Ren, Di Zheng, and Quan-Yuan Wu

School of Computer Science, National University of Defence Technology, Changsha, Hunan, China 410073
[email protected]

Abstract. By using middleware, we can satisfy the urgent demands for performance, scalability and availability in current distributed service-oriented applications. However, for complex applications, load peaks may put the system under extremely high load, and response times may suffer from such fluctuations. Therefore, to utilize the services effectively, especially when the workloads fluctuate frequently, we should make the system react to load fluctuations gradually and predictably. Many existing load balancing middleware systems use a dampening technique to make the load predictable. However, distributed systems are inherently difficult to manage, and the dampening factor cannot be treated as static and fixed; it should be adjusted dynamically according to different load fluctuations. We have therefore proposed a new machine-learning based technique for an adaptive and flexible load prediction mechanism built on our load balancing middleware.

Keywords: Service-Oriented Applications, Load Prediction, Machine-Learning, Middleware.

1 Introduction

To serve the increasing number of online clients that transmit a large, often bursty, number of requests, and to provide dependable services with constantly high quality, we must make distributed computing systems more scalable and dependable. Effective load balancing mechanisms must be used to distribute the client workload equitably among back-end servers to improve overall responsiveness. Load balancing mechanisms can be provided in any or all of the following layers:
• Network-based load balancing: This type of load balancing is provided by IP routers and domain name servers (DNS). Web sites often use network-based load balancing at the network layer (layer 3) and the transport layer (layer 4).
• OS-based load balancing: This type of load balancing is provided by distributed operating systems via load sharing and process migration [1] mechanisms.
• Middleware-based load balancing: This type of load balancing is performed in middleware, often on a per-session or a per-request basis. The key enterprise


applications of the moment, such as astronavigation, telecommunication, and finance, all make use of middleware-based distributed software systems to handle complex distributed applications. There are different realizations of load balancing middleware. For example, stateless distributed applications usually balance the workload with the help of a naming service [2]. But this scheme only supports static, non-adaptive load balancing and cannot meet the needs of complex distributed applications. For more complex applications, an adaptive load balancing schema [3, 4, 5] is needed to take the load condition into account dynamically and avoid overload on individual nodes. Many existing load balancing middleware systems use a dampening technique to make the load predictable. However, distributed systems are inherently difficult to manage, and the dampening factor cannot be treated as static and fixed. So, in this paper we design and implement an efficient load prediction method for service-based applications. Clients can learn the best policy by employing a reinforcement-based learning technique. Reinforcement-based learning is unsupervised learning in which agents learn through trial-and-error interactions with the environment.

2 Load Prediction Method for Service-Oriented Applications

2.1 Model of the Load Balancing Middleware

Our middleware provides load balancing for the service-oriented applications, prevents bottlenecks at the application tier, balances the workload among the different services and enables replication of the service components in a scalable way. The core components are as follows:

Fig. 1. Components of the Load Balancing Middleware

Service Replica Repository: Instances of services need to register with the Service Group. All the references of the groups are stored in the Service Replica Repository. A service group may include several replicas, and we can add or remove replicas to the groups. The main purpose of the service group is to provide a view containing simple information about the references to the locations of all replicas registered with the group. The users need not know where a replica is located.


Service Decision Maker: This component assigns the best replica in the group to service the request, based on the algorithms configured in our load balancing policy [6]. The service decision maker acts as a proxy between the client and the dynamic services. It enables transparency between them without letting the client know about the multiple distributed service replicas.
Load Monitor: The load monitor collects load information from every load agent within a certain time interval. The load information should be refreshed at a suitable interval so that the information provided is not expired.
Load Agent: The purpose of the load agent is to provide load information of the hosts it resides on when requested by the load monitor. As different services might have replicas in the same host, it is hard to determine what percentage of resources is being used by which service at a particular moment. Therefore, a general metric is needed to indicate the level of available resources at the machine during a particular moment.
Resource Allocator: The purpose of this component is to dynamically adjust the resources to achieve a balanced load distribution among different services. In fact, we control the resource allocation by managing the replicas of different services. For example, this component makes decisions on mechanisms such as service replication, service coordination, dynamic adjustment and request prediction.

2.2 Machine-Learning Based Load Prediction

As Figure 2 shows, the host will suffer extreme loads at times T1 and T2, and if we compute the load based on the numbers sampled at these two points we will make wrong load balancing decisions. A useful method is a control-theory technique called dampening, where the system minimizes unpredictable behavior by reacting slowly to changes and waiting for definite trends to minimize over-control decisions. The load will be computed in this way:

new_load = multiplier * old_load + (1 − multiplier) * new_load    (1)

The parameter multiplier is the dampening factor, and its value is between 0 and 1. The parameter old_load represents the previous load and new_load represents the newly sampled load. However, distributed systems are inherently difficult to manage, and the dampening factor cannot be treated as static and fixed; it should be adjusted dynamically according to different load fluctuations. Therefore, we use a machine-learning based load prediction method where the system minimizes unpredictable behavior by reacting slowly to changes and waiting for definite trends to minimize over-control decisions. The simple

Fig. 2. Effect of the Peak Load


exponential smoothing method is based on a weighted average of current and past observations, with most weight given to the current observation and declining weights to past observations. The formula for the exponential moving average is given by equation (2):

φ_{n+1} = θ L_n + (1 − θ) φ_n    (2)

where 0 ≤ θ ≤ 1 is known as the dynamic dampening factor, L_n is the most recent load, φ_n stores the past history and φ_{n+1} is the predicted value of the load.

We maintain two exponentially weighted moving averages with different dynamic dampening factors. A slow moving average (θ → 0) is used to produce a smooth, stable estimate. A fast moving average (θ → 1) adapts quickly to changes in the workload. The maximum of these two values is used as the estimate of the current load on the service provider.
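A compact sketch of this dual-EWMA predictor follows. It is illustrative only: the class name and the concrete θ values are assumptions, not values prescribed by the middleware.

```python
# Illustrative dual-EWMA load predictor: a slow average for stability and a
# fast average for responsiveness; the larger of the two is reported.
class LoadPredictor:
    def __init__(self, theta_slow=0.1, theta_fast=0.8):
        self.theta_slow = theta_slow      # θ → 0: smooth, stable estimate
        self.theta_fast = theta_fast      # θ → 1: adapts quickly to load changes
        self.phi_slow = None
        self.phi_fast = None

    def update(self, load_sample):
        """Apply equation (2) to both averages and return the predicted load."""
        if self.phi_slow is None:
            self.phi_slow = self.phi_fast = load_sample
        else:
            self.phi_slow = self.theta_slow * load_sample + (1 - self.theta_slow) * self.phi_slow
            self.phi_fast = self.theta_fast * load_sample + (1 - self.theta_fast) * self.phi_fast
        return max(self.phi_slow, self.phi_fast)

# Example: predictor = LoadPredictor(); predictor.update(0.4); predictor.update(0.9)
```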

3 Conclusions

To utilize services effectively, especially when the workloads fluctuate frequently, we should make the system react to load fluctuations gradually and predictably. Distributed systems are inherently difficult to manage, and the dampening factor cannot be treated as static and fixed; it should be adjusted dynamically according to different load fluctuations. We have therefore proposed and implemented a new machine-learning based technique for an adaptive and flexible load prediction mechanism built on our load balancing middleware.

Acknowledgements This work was funded by the National Grand Fundamental Research 973 Program of China under Grant No.2005cb321804 and the National Natural Science Foundation of China under Grant No.60603063.

References
1. Rajkumar, B.: High Performance Cluster Computing: Architecture and Systems. ISBN 7-5053-6770-6 (2001)
2. IONA Technologies: Orbix 2000. www.iona-iportal.com/suite/orbix2000.htm
3. Othman, O., O'Ryan, C., Schmidt, D.C.: The Design of an Adaptive CORBA Load Balancing Service. IEEE Distributed Systems Online (2001)
4. Othman, O., Schmidt, D.C.: Issues in the design of adaptive middleware load balancing. In: Proceedings of the ACM SIGPLAN Workshop on Languages, Compilers and Tools for Embedded Systems. ACM Press, New York (2001) 205-213
5. Othman, O., O'Ryan, C., Schmidt, D.C.: Strategies for CORBA middleware-based load balancing. IEEE Distributed Systems Online (2001) http://www.computer.org/dsonline
6. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading (2002) 223-325

A Balanced Resource Allocation and Overload Control Infrastructure for the Service Grid Environment

Jun Wang, Yi Ren, Di Zheng, and Quan-Yuan Wu

School of Computer Science, National University of Defence Technology, Changsha, Hunan, China 410073
[email protected]

Abstract. For service-based Grid applications, the applications may be integrated by using Grid services across the Internet; thus we should balance the load for the applications to enhance resource utilization and increase throughput. One effective way to overcome this problem is to make use of load balancing. Various kinds of load balancing middleware have already been applied successfully in distributed computing. However, they do not take service types into consideration, and the workload imposed by different services requested by clients may differ while remaining invisible to the middleware. Furthermore, traditional load balancing middleware uses fixed, static replica management and relies on load migration to relieve overload. Therefore, we put forward an autonomic replica management infrastructure to support fast response, hot-spot control and balanced resource allocation among services. Corresponding simulation tests were implemented, and the results indicate that this model and its supplementary mechanisms are suitable for service-based Grid applications. Keywords: Web Service, Service-based Grid Applications, Load Balancing, Adaptive Resource Allocation, Middleware.

1 Introduction

Grid services [1, 2] conform to the specifications of Web Services and provide a new direction for constructing Grid applications. The applications may be integrated across the Internet by using Grid services, and the distributed Grid services and resources must be scheduled automatically, transparently and efficiently. Therefore, we must balance the load of the diverse resources to improve the utilization of the resources and the throughput of the systems. Currently, load balancing mechanisms can be provided in any or all of the following layers in a distributed system:
• Network-based load balancing: This type of load balancing is provided by IP routers and domain name servers (DNS). However, it is somewhat limited by the fact that they do not take into account the content of the client requests.
• OS-based load balancing: At the lowest level of the hierarchy, OS-based load balancing is done by the distributed operating system in the form of lowest-system-level scheduling among processors [3, 4].


• Middleware-based load balancing: This type of load balancing is performed in middleware, often on a per-session or a per-request basis. The key enterprise applications of the moment, such as astronavigation, telecommunication, and finance, all make use of middleware-based distributed software systems to handle complex distributed applications. There are different realizations of load balancing middleware. For example, stateless distributed applications usually balance the workload with the help of a naming service [6]. But this scheme only supports static, non-adaptive load balancing and cannot meet the needs of complex distributed applications. For more complex applications, an adaptive load balancing schema [7, 8] is needed to take the load condition into account dynamically and avoid overload on individual nodes. However, many of the services are not dependable because of the loose coupling and high distribution of the Grid environment, and traditional load balancing middleware pays no attention to resource allocation. Therefore, we put forward an autonomic replica management infrastructure based on middleware to support fast response, hot-spot control and balanced resource allocation among different services.

2 Architecture of the Load Balancing Middleware

Our load balancing service is a system-level service introduced to the application tier by using IDL interfaces. Figure 1 features the core components in our service as follows:

Fig. 1. Components of the Load Balancing Middleware

Service Replica Repository: Instances of services need to register with the Service Group. All the references of the groups are stored in the Repository. A service group may include several replicas and we can add or remove replicas to the groups. Service Decision Maker: This component assigns the best replica in the group to service the request based on the algorithms configured in our load balancing policy [5]. The service decision maker acts as a proxy between the client and the dynamic services. It enables transparency between them without letting the client knowing about the multiple distributed service replicas. Load Monitor: Load monitor collects load information (such as CPU utilization) from every load agent within certain time interval. The load information should be refreshed at a suitable interval so that the information provided is not expired.


Load Agent: The purpose of the load agent is to provide load information of the Grid hosts it resides on when requested by the load monitor. As different services might have replicas in the same host, a general metric is needed to indicate the level of available resources at the machine during a particular moment.
Load Prediction: This module uses the machine-learning based load prediction method, where the system minimizes unpredictable behavior by reacting slowly to changes and waiting for definite trends to minimize over-control decisions.
Resource Allocator: The purpose of this component is to dynamically adjust the resources to achieve a balanced load distribution among different services. In fact, we control the resource allocation by managing the replicas of different services.

3 Balanced Resource Allocation and Overload Control

In traditional load balancing middleware, a load balanced distributed application starts out with a given number of replicas, and the requests are simply distributed among these replicas to balance the workloads. However, different services may need different resources, and on some occasions, such as 9/11 or the World Cup, the number of certain kinds of requests increases rapidly while it may be very small on most days. When demand increases dramatically for a certain service, the quality of service (QoS) deteriorates as the service provider fails to meet the demand. Therefore, depending on the availability of the resources, such as CPU load and network bandwidth, the number of replicas may need to grow or decrease over time. That is to say, the replicas should be created or destroyed on demand. So we use an adaptive replica management approach to adjust the number of replicas on demand to realize adaptive resource allocation.

3.1 Load Metrics

In traditional distributed systems, the load can be measured by CPU utilization, memory utilization, I/O utilization, network bandwidth and so on. At the same time, the load may be different for different applications. We take multiple resources and mixed workloads into consideration. The load index of each node is a composite of the usage of different resources, including CPU, memory, and I/O, and can be calculated with:

3 Balanced Resource Allocation and Overload Control In traditional load balancing middleware, a load balanced distributed application starts out with a given number of replicas and the requests just are distributed among these replicas to balance the workloads. However, different services may need different resources at all and in some occasions such as 911 or world cup the number of some kinds of requests will increase fast while it may be very small in most of the days. When demand increases dramatically for a certain service, the quality of service (QoS) deteriorates as the service provider fails to meet the demand. Therefore, depending on the availability of the resources, such as CPU load and network bandwidth, the number of replicas may need to grow or decrease over time. That is to say, the replicas should be created or destroyed on-demand. So we use an adaptive replica management approach to adjust the number of the replicas on demand to realize the adaptive resource allocation. 3.1 Load Metrics In traditional distributed systems, the load can be measured by the CPU utilization, the memory utilization, the I/O utilization, the network bandwidth and so on. At the same time, the load may be different for different applications. We take multiple resources and mixed workloads into consideration. The load index of each node is composed of composite usage of different resources including CPU, memory, and I/O which can be calculated with: L

j

=

3



i=1

ai ki2

( 0 ≤ a i ≤ 1,

3



i=1

ai

= 1)

(1)

Here L_j denotes the load of host j, and k_i denotes the percentage of the corresponding resource that has been consumed. Furthermore, a_i denotes the weight of each load metric; the weights can be configured differently for different applications.
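For illustration, a minimal C++ sketch of the composite load index of equation (1) is given below; the function name and the example weight values are assumptions, not taken from the paper.

```cpp
#include <array>
#include <cstddef>

// Composite load index of one host, following equation (1):
// L_j = sum_i a_i * k_i^2, with 0 <= a_i <= 1 and sum_i a_i = 1.
// k[0..2] are the consumed fractions of CPU, memory and I/O in [0, 1];
// a[0..2] are application-specific weights (example values assumed here).
double loadIndex(const std::array<double, 3>& k,
                 const std::array<double, 3>& a = {0.5, 0.3, 0.2}) {
    double L = 0.0;
    for (std::size_t i = 0; i < 3; ++i)
        L += a[i] * k[i] * k[i];
    return L;   // lies in [0, 1] because the weights sum to 1
}
```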


3.2 On-Demand Creation and Destruction of the Replicas

We first give some definitions. Let H = {h1, h2, ..., hi}, where h_j represents the j-th node of the system, and let S = {s1, s2, ..., sl}, where s_k represents the k-th service of the system. Furthermore, let N_k represent the number of replicas of the k-th service. The set of replicas of the k-th service can then be denoted by R(S_k) = {S_k1, ..., S_kN_k}. The host on which the m-th replica of the l-th service resides is denoted by H(S_lm), and its load at time t is denoted by L_H(S_lm)(t).

The first problem is when to create a new replica. Under normal conditions, new requests are dispatched to the fittest replicas. But the hosts on which the initial replicas reside may all be under high load, so that new requests would overload them. In that case we should create new replicas to share the workload. We set a replica_addition threshold to help make this decision. For the i-th service of the system, if equation (2) holds, a new replica is created:

∀x (L_H(S_ix)(t) ≥ replica_addition),  x ∈ R(S_i)    (2)

The second problem is where to create the new replica. Using the load metrics discussed above, we can compute the workload of each host. Furthermore, the hosts may be heterogeneous, and the workload of each host is different. We therefore set a replica_deployment threshold for every host. According to equation (3), if the workload of some host does not exceed the threshold, the new replica can be created on it. Otherwise no replica is created, because the host would soon be overloaded and the whole system would become unstable; in that case the incoming requests should be rejected to prevent failures.

∃x (L_Hx(t) < replica_deployment),  x ∈ {1, ..., i}    (3)

The third problem is what kind of replicas to create. An application may be composed of different services, and several services may all need new replicas. However, the services may have different importance, and we should distinguish them with different priorities. Therefore, we classify services as high-priority, medium-priority and low-priority services according to their importance. Services with different priorities may have different maximum numbers of replicas. For example, supposing the number of hosts is n, the maximum for a high-priority service can be n, the maximum for a medium-priority service can be ⌊n/2⌋, and the maximum for a low-priority service can be ⌊n/3⌋. These configurations can be revised according to practical needs. Secondly, the Resource Allocator module maintains three queues with different priorities. Each queue is managed with a FIFO strategy, and the replicas of higher-priority services are created preferentially.

The last problem is the elimination of replicas, since the arrival of requests may fluctuate. If the number of requests is small, then monitoring multiple replicas and dispatching requests among them is unnecessary and wasteful. We should eliminate some replicas to keep the system efficient and reduce the extra overhead. We therefore set a replica_elimination threshold to control the elimination of replicas. For the i-th service of the system, if equation (4) holds, some replica is eliminated.

∃x (L_H(S_ix)(t) < replica_elimination),  x ∈ R(S_i)    (4)

The elimination is performed repeatedly until equation (4) becomes false or the number of replicas drops to one. Furthermore, on unusual occasions such as a 911 emergency or the World Cup, certain simple services may become hot spots. Therefore, we adjust the priority of services according to the number of incoming requests to avoid overload. A low-priority service may be given a higher priority as client requests increase, so that more replicas are created to serve the requests. Once the number of requests decreases, the priority of these services is lowered again and the excess replicas are eliminated.
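The decision rules of equations (2)-(4) can be summarized in code. The following C++ sketch is only an illustration: the data structure and the threshold values (apart from the 85% replica_addition setting used in Section 4) are assumptions.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical view of one service: the current load index of every host
// that hosts one of its replicas, plus the loads of all candidate hosts.
struct ServiceState {
    std::vector<double> replicaHostLoads;   // L_H(S_ix)(t) for x in R(S_i)
    std::vector<double> candidateHostLoads; // loads of hosts that could take a new replica
};

// Illustrative thresholds (the paper uses 85% for replica_addition in Sect. 4).
const double replica_addition    = 0.85;
const double replica_deployment  = 0.85;
const double replica_elimination = 0.40;  // assumed value, not given in the paper

// Equation (2): add a replica only if every current replica host is overloaded.
bool shouldAddReplica(const ServiceState& s) {
    return std::all_of(s.replicaHostLoads.begin(), s.replicaHostLoads.end(),
                       [](double L) { return L >= replica_addition; });
}

// Equation (3): a new replica may be placed only on a host below replica_deployment.
bool canDeployReplica(const ServiceState& s) {
    return std::any_of(s.candidateHostLoads.begin(), s.candidateHostLoads.end(),
                       [](double L) { return L < replica_deployment; });
}

// Equation (4): remove a replica while some replica host is lightly loaded
// and more than one replica remains.
bool shouldRemoveReplica(const ServiceState& s) {
    return s.replicaHostLoads.size() > 1 &&
           std::any_of(s.replicaHostLoads.begin(), s.replicaHostLoads.end(),
                       [](double L) { return L < replica_elimination; });
}
```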

4 Performance Results

As depicted in figure 2, our middleware StarLB runs on RedHat Linux 9 / Pentium 4 / 256M / 1.7G. The clients run on Windows 2000 / Pentium 4 / 512M / 2G. Furthermore, to make the results easier to compare, the Grid hosts are identical, all running Windows 2000 / Pentium 4 / 512M / 2G.

Fig. 2. Load Balancing Experiment Testbed

Fig. 3. Initial Replicas of different Services

At the beginning of this test, we used the services without the help of autonomic replica management. Suppose there are six hosts and six different services. (This test is only used to analyze and present the mechanisms; more complex tests using many more hosts and services are omitted here.) Among


these services, two are high-priority, two are medium-priority and two are low-priority: service1 and service2 are high-priority services, service3 and service4 are medium-priority services, and the remaining two are low-priority services. Each service has only one initial replica; each replica resides on its own host and responds to client requests. The distribution of the replicas is depicted in figure 3. Furthermore, we set the low-priority services to have at most two replicas, the medium-priority services at most three replicas, and the high-priority services at most six replicas. As depicted in figures 4(a) and 4(b), according to our setting, a new replica was created when the load index reached 85%. From the two upper curves in figure 4(a) we can see that, as requests arrived, new replicas were created on the hosts where the low-priority services resided. At the same time, because of the creation of the new replicas, the response time of the high-priority services was brought down and their throughput increased effectively. Furthermore, the workload of all hosts was balanced efficiently. All the creations are depicted in figure 5; the services with higher priority create new replicas preferentially and have a larger maximum replica number.

Fig. 4. (a) Response Time with Replica Management. (b) Load Index with Replica Management.

Fig. 5. Creation and Elimination of the Replicas


One question remains to be discussed: the elimination of extra replicas and the elevation of priority. We deployed all the replicas in the initial state depicted in figure 3 and made service6 become a hot spot. As depicted in figures 6(a) and 6(b), the number of requests for service6 was increased gradually. The CPU utilization of Host6 and the response time of service6 then increased as well. According to our setting of the replica_addition threshold, when the load index reached 85% a new replica was created. Since the load index of Host5 was the lowest, a new replica of service6 was created on Host5. With the creation of the new replica, the load index of Host6 decreased, as did the response time. At the same time, the response time of service5 was only slightly affected.

Fig. 6. (a) Response Time with Priority Elevation. (b) Load Index with Priority Elevation.


Fig. 7. Creation and Elimination of the Replicas with Priority Elevation

However, the requests for service6 kept increasing and the load index reached 85% again. As discussed before, service6 is a low-priority service and its maximum number of replicas is two. So the priority of service6 was elevated to allow the creation of a new replica. The new replica was then


created on Host3, and the requests could be distributed with the help of the new replica. Finally, when we decreased the requests for service6, the load index decreased and so many replicas were no longer necessary. The priority of service6 was therefore lowered and the unnecessary replicas were eliminated. As depicted in figure 6(b), when the load index fell below the replica_elimination threshold, the replica on Host3 was eliminated, as it had the highest load index among the three replicas. The remaining two replicas kept handling the requests until the load index became higher or lower again. All the creations and eliminations of replicas with priority elevation are depicted in figure 7.

5 Conclusions

Various kinds of load balancing middleware have already been applied successfully in distributed computing. However, they do not take the service types into consideration, and the different workloads induced by the different services requested by clients remain out of their sight. Furthermore, traditional load balancing middleware uses fixed and static replica management and relies on load migration to relieve overload. Therefore, we put forward an autonomic replica management infrastructure based on middleware to support fast response, hot-spot control and balanced resource allocation among different services. Corresponding simulation tests were implemented, and their results indicate that this model and its supplementary mechanisms are suitable for service-based Grid applications.

Acknowledgments

This work was funded by the National Grand Fundamental Research 973 Program of China under Grant No. 2005cb321804 and the National Natural Science Foundation of China under Grant No. 60603063.

References
1. Foster, I., Kesselman, C., Nick, J.: Grid Services for Distributed System Integration. Computer (2002) 37-46
2. http://www.gridforum.org/ogsi-wg/drafts/GS_Spec_draft03_2002-07-17.pdf
3. Chow, R., Johnson, T.: Distributed Operating Systems and Algorithms. Addison Wesley Longman, Inc. (1997)
4. Rajkumar, B.: High Performance Cluster Computing: Architecture and Systems. ISBN 7-5053-6770-6 (2001)
5. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Reading: Addison-Wesley (2002) 223-325
6. IONA Technologies, "Orbix 2000." www.iona-iportal.com/suite/orbix2000.htm
7. Othman, O., O'Ryan, C., Schmidt, D.C.: The Design of an Adaptive CORBA Load Balancing Service. IEEE Distributed Systems Online, vol. 2 (2001)
8. Othman, O., Schmidt, D.C.: Issues in the Design of Adaptive Middleware Load Balancing. In: Proceedings of the ACM SIGPLAN Workshop on Languages, Compilers and Tools for Embedded Systems. New York: ACM Press (2001) 205-213

Recognition and Optimization of Loop-Carried Stream Reusing of Scientific Computing Applications on the Stream Processor

Ying Zhang, Gen Li, and Xuejun Yang

Institute of Computer, National University of Defense Technology, 410073 Changsha, China
[email protected]

Abstract. Compared with other stream applications, scientific stream programs are usually constrained by memory access. Loop-carried stream reusing means reusing streams across different loop iterations, and it can greatly improve the locality of the SRF. In this paper, we present algorithms to recognize loop-carried stream reusing and give the steps needed to apply the optimization, after analyzing the characteristics of scientific computing applications. We then run several representative microbenchmarks and scientific stream programs with and without our optimization on Isim. Simulation results show that loop-carried stream reusing can greatly improve the performance of memory-bound scientific stream programs.

1 Introduction

Conventional architectures can no longer meet the demands of scientific computing [1][2]. Among state-of-the-art architectures, the stream processor [3] (as shown in fig. 1) has drawn the attention of scientific researchers because it processes computation-intensive applications effectively [4-8]. Compared with other stream applications, scientific computing applications have much more data, more complex data access patterns and stronger data dependences. The stream processor has a three-level memory hierarchy [12]: local register files (LRF) near the ALUs exploiting locality within kernels, the global stream register file (SRF) exploiting producer-consumer locality between kernels, and the streaming memory system exploiting global locality. The bandwidth ratios between the three levels are large; in Imagine [9][10] the ratio is 1:13:218. As a result, how to enhance the locality of the SRF and LRF, and consequently how to reduce off-chip memory traffic, are key issues in improving the performance of scientific stream programs constrained by memory access. Fig. 2 shows how a stream flows across the three-level memory hierarchy during the execution of a stream program. First, the stream is loaded from off-chip memory into the SRF and distributed into the corresponding buffer. Then it is loaded from the SRF into the LRF to supply operands to a kernel. During the execution of the kernel, all records participating in the kernel and all temporary results are kept in the LRF. After the kernel finishes, the records are stored back to the SRF. If there is producer-consumer locality between this kernel and a later kernel, the stream stays in the SRF; otherwise it is stored back to off-chip memory.


Loop-carried stream reusing means reusing streams across different iterations, and it can improve the locality of the SRF. In this paper, we present algorithms to recognize loop-carried stream reusing and give the steps to apply our method to stream programs, based on an analysis of typical scientific computing applications. We give a recognition algorithm to decide which applications can be optimized, and the steps for utilizing loop-carried stream reusing to optimize stream organization. We then run several representative microbenchmarks and scientific stream programs with and without our optimization on a cycle-accurate stream processor simulator. Simulation results show that the optimization can improve scientific stream program performance efficiently.

Fig. 1. Block diagram of a stream processor

Fig. 2. Stream flowing across the memory system

2 Loop-Carried Stream Reusing

Loop-carried stream reusing is defined as follows: between neighboring loop iterations of a stream-level program, input or output streams of kernels in the first iteration can be used as input streams of kernels in the second iteration. If input streams are reused, we call it loop-carried input stream reusing; otherwise, we call it loop-carried output stream reusing. The essence of stream reusing optimization is to enhance the locality of the SRF. Correspondingly, input stream reusing enhances producer-producer locality of the SRF, while output stream reusing enhances producer-consumer locality of the SRF.

Fig. 3. Example code

Fig. 4. Data trace of array QP references

We take the code in fig. 3, taken from the stream program MVM, as an example to illustrate our method, where NXD equals NX+2. In this paper, we let NX and NY equal 832.


Fig. 4 shows the data traces of QP(L), QP(L+NXD) and QP(L-NXD) participating in loop2 of fig. 3. QP(1668,2499) is QP(L+NXD) of loop2 when J=1, QP(L) of loop2 when J=2, and QP(L-NXD) of loop2 when J=3. So stream QP can be reused between different iterations of loop1. If QP(1668,2499) is organized as a stream, it will be in the SRF after loop2 with J=1 finishes. Consequently, when loop2 runs with J=2 or J=3, it does not fetch stream QP(1668,2499) from off-chip memory but from the SRF.

Fig. 5. A D-level nested loop

2.1 Recognizing Loop-Carried Stream Reusing

Fig. 5 shows a generalized perfect nest of D loops. The body of the loop nest reads elements of the M-dimensional array A twice. In this paper, we only consider linear subscript expressions. The i-th dimension subscript expression of the first array A reference is denoted F_i = Σ_j C_i,j·I_j + C_i,0, where I_j is an index variable (1 ≤ j ≤ D), C_i,j is the coefficient of I_j, and C_i,0 is the remaining part of the subscript expression that does not contain any index variables. Correspondingly, the i-th dimension subscript expression of the second array A reference is denoted G_i = Σ_j C'_i,j·I_j + C'_i,0. If, in the P-th level loop, the data trace covered by the leftmost Q dimensions of one array A read reference in the nest with I_P = i is the same as that of the other array A read reference in the nest with I_P = i + d, where d is a constant, and the two traces differ from each other within the same nest (as with array QP in loop2 of fig. 3), then the two array A read references can be optimized by input stream reusing with respect to loop P and dimension Q. (We assume that memory is accessed leftmost dimension first, as in FORTRAN.) We now give the algorithm for Recognizing Loop-carried Input Stream Reusing.

RLISR. Two array references in the same loop body can be optimized by input stream reusing with respect to loop P and dimension Q if:

(1) when M = 1, i.e. array A is a 1-D array, the subscript expressions of the two array A references can be written as F_1 = Σ_j C_1,j·I_j + C_1,0 and G_1 = Σ_j C'_1,j·I_j + C'_1,0 respectively, and Q = 1. The coefficients of F_1 and G_1 should satisfy the following formulas:

∀j: ((1 ≤ j ≤ D) ∧ C_1,j = C'_1,j)    (a)

C_1,0 = C'_1,0 ± d·C'_1,P    (b)

∀j: ((1 ≤ j ≤ D) ∧ (C_1,j ≥ Σ_{j=1}^{D} C_1,j·(U_j − L_j)))    (c)


Formulas (1a) and (1b) ensure that, when the loop indices of the outermost P-1 loops are given and the loop body sweeps the space of the innermost D-P loops, the data trace of one array A read reference in the nest with I_P = i is the same as that of the other array A read reference in the nest with I_P = i + d. Here d is a constant, and we specify d = 1 or 2 according to [13]. When I_1, ..., I_{P-1} are given and I_P, ..., I_D vary, formula (1c) prevents the data trace of one array A reference in the nest with I_P = i from overlapping that of the other in the nest with I_P = i + d. This formula preserves the character of stream processing, i.e. data is loaded into the stream processor as streams, processed, and written back to the SRF in batches; for the stream processor, the cost of randomly accessing records of streams is very high.

(2) when M ≠ 1, i.e. array A is a multi-dimensional array, the subscript expressions of the two array A read references should satisfy the following conditions:

(d) the Q-th dimension subscript expression of one array reference is obtained by translating the index I_P in the corresponding subscript expression of the other reference by d, i.e. G_Q = F_Q(I_P ± d), and

(e) all subscript expressions of one array A reference are the same as those of the other except the Q-th dimension subscript expression, i.e. ∀i ((i ≠ Q) ∧ (F_i = G_i)), and

(f) for the two array A references, the innermost index variable appearing in one dimension's subscript expression does not appear in any dimension further to the right, i.e. ∀i(∃j(C_i,j ≠ 0 ∧ ∀j'(j' > j ∧ C_i,j' = 0)) → ∀i'(i' > i ∧ C_i',j = 0)) ∧ ∀i(∃j(C'_i,j ≠ 0 ∧ ∀j'(j' > j ∧ C'_i,j' = 0)) → ∀i'(i' > i ∧ C'_i',j = 0)).

It can be proved that the data access traces of two array references satisfying condition (2) also satisfy condition (1), and when U_j − L_j is large enough the two conditions are equivalent. The algorithm for Recognizing Loop-carried Output Stream Reusing is similar to RLISR, except that reusing a stream must not change the original data dependences. We give the RLOSR algorithm without detailed explanation.

RLOSR. We denote the subscript expressions of the read reference as F_i and those of the write reference as G_i. Two array references in the loop body can be optimized by output stream reusing with respect to loop P and dimension Q if:

(3) when M = 1, the coefficients of F_1 and G_1 satisfy the following formulas:

∀j: ((1 ≤ j ≤ D) ∧ C_1,j = C'_1,j)    (g)

C_1,0 = C'_1,0 + d·C'_1,P    (h)

∀j: ((1 ≤ j ≤ D) ∧ (C_1,j ≥ Σ_{j=1}^{D} C_1,j·(U_j − L_j)))    (i)

(4) when M ≠ 1, the subscript expressions of the two array A references satisfy the following formulas:

G_Q = F_Q(I_P + d)    (j)

∀i ((i ≠ Q) ∧ (F_i = G_i))    (k)

∀i(∃j(C_i,j ≠ 0 ∧ ∀j'(j' > j ∧ C_i,j' = 0)) → ∀i'(i' > i ∧ C_i',j = 0)) ∧ ∀i(∃j(C'_i,j ≠ 0 ∧ ∀j'(j' > j ∧ C'_i,j' = 0)) → ∀i'(i' > i ∧ C'_i',j = 0))    (l)
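As a rough illustration, the following C++ sketch transcribes conditions (1a)-(1c) of the RLISR test for the 1-D case exactly as stated above; the function name and the coefficient-vector layout are assumptions made for illustration.

```cpp
#include <vector>

// Hypothetical coefficient vectors of two 1-D references A(F1) and A(G1):
// F1 = sum_j C[j]*I_j + C[0], G1 = sum_j Cp[j]*I_j + Cp[0], with j = 1..D.
// U[j], L[j] are the upper/lower bounds of index I_j (vectors have size D+1).
// P is the reuse loop and d the reuse distance (1 or 2).
bool rlisr1D(const std::vector<long>& C, const std::vector<long>& Cp,
             const std::vector<long>& U, const std::vector<long>& L,
             int D, int P, int d) {
    // (1a): all index coefficients are equal.
    for (int j = 1; j <= D; ++j)
        if (C[j] != Cp[j]) return false;

    // (1b): the constant terms differ by +/- d times the coefficient of I_P.
    long diff = C[0] - Cp[0];
    long step = static_cast<long>(d) * Cp[P];
    if (diff != step && diff != -step) return false;

    // (1c): each coefficient dominates the span of the iteration space,
    // as written in the paper, so the two data traces do not overlap.
    long span = 0;
    for (int j = 1; j <= D; ++j) span += C[j] * (U[j] - L[j]);
    for (int j = 1; j <= D; ++j)
        if (C[j] < span) return false;

    return true;
}
```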

2.2 Optimizing Loop-Carried Stream Reusing

We now give the steps for using the input stream reusing method to optimize stream organization.


Step A. Organize the different array A references in the innermost D-P loops as streams A1 and A2 according to their data traces.
Step B. Organize all operations on array A in the innermost D-P loops as a kernel.
Step C. Organize all operations in the outermost P loops as the stream-level program.

When the nest with I_P = i of loop P in the stream-level program operates on streams A1 and A2, one of them has already been loaded into the SRF by the previous nest, which means that the kernel does not fetch it from off-chip memory but from the SRF. From the stream processor architecture we know that the time to access off-chip memory is much larger than the time to access the SRF, so stream reusing can improve stream program performance greatly. The steps for using output stream reusing are analogous.

In the unoptimized stream program MVM, we organize the different array QP reads in loop1 as three streams according to their data traces, and organize the operations in loop1 as a kernel. The length of each stream is 832*832. When running, the stream program must load these three streams into the SRF, whose total length is 692224*3, nearly three times the size of array QP. With the stream reusing method above, we instead organize the different array QP read references in loop2 as three streams according to their own data traces, organize the operations in loop2 as a kernel, and organize the operations in loop1 outside loop2 as the stream-level program. Thus there are 832*3 streams in stream program loop1, each of length 832. In stream program loop1, the streams QP(L), QP(L+NXD) and QP(L-NXD) of neighboring iterations can then be reused. As a result, the stream program only needs to load 832 streams of length 832 from off-chip memory to the SRF, a total length of 692224, nearly 1/3 of that of the unoptimized program.
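The following tiny C++ program is not stream code; it merely reproduces the memory-traffic arithmetic of the MVM example above (three full-length streams versus 832 reusable row streams).

```cpp
#include <cstdio>

// Off-chip traffic of the MVM example (Sect. 2.2): without reuse, QP(L),
// QP(L+NXD) and QP(L-NXD) are three full-length streams; with loop-carried
// reuse, each row of QP is loaded once and kept in the SRF for the two
// following iterations of the outer loop. Sizes follow the paper.
int main() {
    const long NX = 832, NY = 832;   // problem size used in the paper
    const long rowLen = NX;          // one stream per row after the optimization

    long unoptimized = 3 * NX * NY;  // three streams of length NX*NY
    long optimized   = NY * rowLen;  // NY row streams, each loaded once

    std::printf("off-chip loads without reuse: %ld elements\n", unoptimized);
    std::printf("off-chip loads with reuse:    %ld elements (about 1/3)\n", optimized);
    return 0;
}
```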

3 Experiment

We compare the performance of microbenchmarks and several scientific applications with and without stream reusing optimization. All applications are run on Isim, a cycle-accurate simulator for a single-node Imagine stream processor [9][10]. Table 1 summarizes the test programs used for evaluation. The microbenchmarks listed in the upper half of the table stress particular aspects of loop-carried stream reusing; e.g., if there is input stream reusing between adjacent loop nests with respect to loop 2 and dimension 2, the benchmark is named P2Q2d1. All microbenchmarks are stream programs of the applications in fig. 6, given in FORTRAN code. P2Q2d1, P3Q3d1, P3Q3d1O and P3Q3d2 are the stream programs corresponding to 6(a), 6(b), 6(c) and 6(d), each optimized by loop-carried stream reusing.

Fig. 6. FORTRAN code of applications to be optimized


Table 1. Benchmark programs

P2Q2d1: P=Q=2, d=1, optimized by input stream reusing.
P2Q2d1l2: the same application as P2Q2d1, except that it is not optimized by stream reusing; array references of the innermost 2 loops are organized as streams.
P2Q2d1l3: the same as P2Q2d1l2, except that array references of all 3 loops are organized as streams.
P3Q3d1: P=Q=3, d=1, optimized by input stream reusing.
P3Q3d1O: the same as P3Q3d1, except that it is optimized by output stream reusing.
P3Q3d2: the same as P3Q3d1, except that d=2.
QMR.: abbreviation of QMRCGSTAB, a subspace method for solving large nonsymmetric sparse linear systems [14]; coefficient array size 800*800.
MVM: a subroutine of a hydrodynamics application, computing a band matrix multiplication of size 832*832.
Laplace: central differencing of a two-dimensional array of size 256*256.

Fig. 7. With the increase of array size, the performance of different stream implementations of the application in 6(a), in terms of memory traffic (bytes) and run time (cycles)

Fig. 8. Performance of P2Q2d1, P3Q3d1, P3Q3d1O and P3Q3d2 with an array size of 64, in terms of memory traffic (bytes) and run time (cycles)


Fig. 9. Effects of stream reusing on the memory traffics of scientific programs


Fig. 10. Speedup of scientific programs with stream reusing

P2Q2d1l2 and P2Q2d1l3 are the stream programs of 6(a) without the optimization. In every microbenchmark, 2*N out of 4*N streams can be reused as N streams in the SRF, except


P2Q2d1, in which 2*N² out of 4*N² streams can be reused as N² streams in the SRF. The scientific applications listed in the lower half of the table are all constrained by memory access. 14994 out of 87467 streams in QMR. can be reused as 4998 streams in the SRF, 3 out of 8 streams in MVM can be reused as 1 stream in the SRF, and 3 out of 5 streams in Laplace can be reused as 1 stream in the SRF.

Fig. 7 shows the performance of different stream implementations of the application in 6(a) as the array size increases. Fig. 7(a) shows off-chip memory load traffic, fig. 7(b) shows store traffic, fig. 7(c) shows total off-chip memory traffic, and fig. 7(d) shows the run time of these implementations. In fig. 7(a), the load traffic of P2Q2d1 is nearly 2/3 of that of the other two implementations whatever the array size. This is because the input loop-carried stream reusing optimization finds the loop-carried stream reuse, improves the locality of the SRF and consequently reduces the load memory traffic. In fig. 7(b) the store traffic of the different implementations is the same, because there is only input stream reusing, which has no effect on store traffic. From fig. 7(c), we can see that because loop-carried stream reusing reuses 2 input streams as one stream in the SRF, it cuts down the total memory traffic considerably. In fig. 7(d), when the array size is 64 the run time of P2Q2d1 is larger than that of the other two implementations, and when the array size is 128 it is still a little larger. The reason is that when the array size is small, the streams of P2Q2d1 are much shorter and more numerous than those of the other two implementations. As a result, the overheads of preparing to load streams from off-chip memory to the SRF, including the time the host spends writing SDRs (Stream Descriptor Registers) and MARs (Memory Access Registers), weigh so heavily that they cannot be hidden. As the array size increases, the run time of P2Q2d1 becomes smaller and smaller relative to the other two implementations, because with longer streams the cost of loading streams into the SRF dominates and the preparation overheads can be hidden well. The memory traffic of P2Q2d1 is the lowest, and consequently its performance is the highest.

Fig. 8 shows the performance of P2Q2d1, P3Q3d1, P3Q3d1O and P3Q3d2 with an array size of 64. Fig. 8(a) shows off-chip memory load traffic, fig. 8(b) shows store traffic, fig. 8(c) shows total off-chip memory traffic, and fig. 8(d) shows the run time. These applications are representative examples of loop-carried stream reusing. In figs. 8(a), 8(b) and 8(c), the off-chip load, store and total traffics have characteristics similar to those in fig. 7. In fig. 8(d), the performance of all applications except P2Q2d1 is improved by the stream reusing optimization; the reason for the performance drop of P2Q2d1 has been given above. The results show that these representative applications optimized by loop-carried stream reusing all obtain performance increases similar to those in fig. 7.

Fig. 9 shows the effects of stream reusing on the memory traffic of the scientific programs used in our experiments. Fig. 10 shows the speedup of the scientific applications with stream reusing over without. All these applications are optimized by input stream reusing. From the results, we can see that because all these applications are constrained by memory access, the improvement in application performance brought by stream reusing is nearly in proportion to the number of streams that can be reused.


4 Conclusion and Future Work

In this paper, we give a recognition algorithm to decide which applications can be optimized, and the steps for utilizing loop-carried stream reusing to optimize stream organization. Several representative microbenchmarks and scientific stream programs, with and without our optimization, are run on Isim, a cycle-accurate stream processor simulator. Simulation results show that the optimization can efficiently improve the performance of scientific stream programs constrained by memory access. In the future, we will develop more programming optimizations that take advantage of the architectural features of the stream processor for scientific computing applications.

References
1. Wulf, W.A., McKee, S.A.: Hitting the Memory Wall: Implications of the Obvious. Computer Architecture News 23(1) (1995) 20-24
2. Burger, D., Goodman, J., Kagi, A.: Memory Bandwidth Limitations of Future Microprocessors. In: Proceedings of the 23rd International Symposium on Computer Architecture, Philadelphia, PA (1996) 78-89
3. Amarasinghe, S., William: Stream Architectures. In: PACT 2003, September 27, 2003
4. Merrimac - Stanford Streaming Supercomputer Project, Stanford University, http://merrimac.stanford.edu/
5. Dally, W.J., Hanrahan, P., et al.: Merrimac: Supercomputing with Streams. SC2003, November 2003, Phoenix, Arizona
6. Erez, M., Ahn, J.H., et al.: Merrimac - Supercomputing with Streams. In: Proceedings of the 2004 SIGGRAPH GP^2 Workshop on General Purpose Computing on Graphics Processors, June 2004, Los Angeles, California
7. Wang Guibin, Tang Yuhua, et al.: Application and Study of Scientific Computing on Stream Processor. Advances on Computer Architecture (ACA'06), August 2006, Chengdu, China
8. Du Jing, Yang Xuejun, et al.: Implementation and Evaluation of Scientific Computing Programs on Imagine. Advances on Computer Architecture (ACA'06), August 2006, Chengdu, China
9. Rixner, S.: Stream Processor Architecture. Kluwer Academic Publishers, Boston, MA (2001)
10. Mattson, P.: A Programming System for the Imagine Media Processor. Ph.D. thesis, Dept. of Electrical Engineering, Stanford University (2002)
11. Johnsson, O., Stenemo, M., ul-Abdin, Z.: Programming & Implementation of Streaming Applications. Master's thesis, Computer and Electrical Engineering, Halmstad University (2005)
12. Kapasi, U.J., Rixner, S., et al.: Programmable Stream Processor. IEEE Computer, August 2003
13. Goff, G., Kennedy, K., Tseng, C.-W.: Practical Dependence Testing. In: Proceedings of the SIGPLAN '91 Conference on Programming Language Design and Implementation. ACM, New York (1991)
14. Chan, T.F., Gallopoulos, E., Simoncini, V., Szeto, T., Tong, C.H.: A Quasi-Minimal Residual Variant of the Bi-CGSTAB Algorithm for Nonsymmetric Systems. SIAM Journal on Scientific Computing (1994)

A Scalable Parallel Software Volume Rendering Algorithm for Large-Scale Unstructured Data

Kangjian Wangc and Yao Zheng*

College of Computer Science, and Center for Engineering and Scientific Computation, Zhejiang University, Hangzhou, 310027, P.R. China
[email protected], [email protected]

Abstract. In this paper, we develop a highly accurate parallel software scanned cell projection algorithm (PSSCPA) which is applicable to any classification system. The algorithm can handle both convex and non-convex meshes, and provides maximum flexibility in the applicable types of cells. Compared with previous algorithms using 3D commodity graphics hardware, it introduces no volume decomposition and no rendering artifacts in the resulting images. Finally, high resolution images generated by the algorithm are provided, and the scalability of the algorithm is demonstrated on a PC cluster with modest parallel resources.

Keywords: Parallel volume rendering, cell projection, software volume rendering.

1 Introduction

Traditionally, parallel volume rendering algorithms were designed to run on expensive parallel machines like the SGI Power Challenge, IBM SP2, or SGI Origin 2000 [1, 2, 3, 4]. Recently, however, the decreasing cost and high availability of commodity PCs and network technologies have enabled researchers to build powerful PC clusters for large-scale computations. Scientists can now afford to use clusters for visualization calculations, either for runtime visual monitoring of simulations or for post-processing visualization. Therefore, parallel software volume rendering on clusters is becoming a viable solution for visualizing large-scale data sets. We develop a highly accurate Parallel Software Scanned Cell Projection Algorithm (PSSCPA) in this paper. The algorithm employs a standard scan-line algorithm and the partial pre-integration method proposed by Moreland and Angel [5], and thus supports the rendering of data with any classification system. It can handle meshes composed of tetrahedra, bricks, prisms, wedges, and pyramids, or any mix of these cell types. The PSSCPA runs on distributed-memory parallel architectures, and uses asynchronous send/receive operations and a multi-buffer method to reduce communication overheads and overlap rendering computations. Moreover, we use a hierarchical spatial data structure and an A-Buffer [6] technique to allow early ray-merging to take place within a local neighborhood.

* Corresponding author.



The remainder of this paper is organized as follows. In the next section we relate our work to previous research. Section 3 describes the PSSCPA. In section 4 we present the experimental results of the PSSCPA. The paper is concluded in section 5, where some proposals for future work are also presented.

2 Related Work

Cell projection is a well-known volume rendering technique for unstructured meshes. A scalar field is formed by specifying scalar values at all vertices of a mesh, and is then visualized by mapping it to colors and opacities with suitable transfer functions. To efficiently visualize large-scale unstructured grid volume data, parallel processing is one of the best options. Ma [3] presented a parallel ray-casting volume rendering algorithm for distributed-memory architectures. This algorithm needs explicit connectivity information for each ray to march from one element to the next, which incurs considerable memory usage and computational overhead. Nieh and Levoy [7] developed a parallel volume rendering system; their algorithm, however, was tested on a distributed shared-memory architecture, as was the PZSweep algorithm proposed by Farias et al. [8]. Farias et al. [9] soon enhanced the PZSweep routine [8] to run on a PC cluster, and strove to find a feasible load balancing strategy for the new architecture. Chen et al. [10] presented a hybrid parallel rendering algorithm on SMP clusters, which makes it easier to achieve efficient load balancing. Ma et al. [4] presented a variant algorithm without the requirement for connectivity information. Since each tetrahedral cell is rendered independently of other cells, data can be distributed in a more flexible manner. The PSSCPA investigated in this paper originates from this algorithm; however, several improvements have been made, as introduced in Section 3.

3 The Parallel Software Scanned Cell Projection Algorithm

Fig. 1 shows the parallel volume rendering pipeline performed by the PSSCPA. The exterior faces of the volume and the volume data are distributed in a round-robin fashion among processors. The image screen is divided using a simple scan-line interleaving scheme. Then a parallel k-d tree and an A-Buffer are constructed; they are used in the rendering step to optimize the compositing process and to reduce runtime memory consumption. Each processor scan-converts its local cells to produce many ray segments, and sends them to their final destinations in image space for merging. A multi-buffer scheme is used in conjunction with asynchronous communication operations to reduce overheads and overlap the communication of ray segments with rendering computations. When scan conversion and ray-segment merging are finished, the master node receives the completed sub-images from all slaves and assembles them for display. Our algorithm can handle convex meshes, non-convex meshes, and meshes with disconnected components.

(Figure 1 pipeline stages: Identifying Exterior Faces; Data Distribution; Constructing a Parallel k-d Tree; Creating an A-Buffer; Scan Conversion; Ray-segment Mergence; Image Composition and Display)

Fig. 1. The parallel volume rendering pipeline

3.1 Data Distribution

The ideal goal of a data distribution scheme is that each processor incurs the same computational load and the same amount of memory usage. However, several factors prevent this from being achieved. First, there are computational costs to scan-convert a cell, so variations in the number of cells assigned to each processor produce variations in workload. Second, cells come in different sizes and shapes; the difference in size can be as large as several orders of magnitude due to the adaptive nature of the mesh. As a result, the projected image area of a cell can vary dramatically, which produces similar variations in scan conversion costs. Finally, the projected area of a cell also depends on the viewing direction. Nearby cells in object space are often similar in size, so grouping them together exacerbates load imbalance, making it very difficult to obtain a satisfactory result. We have therefore chosen the round-robin scheme, dispersing connected cells as widely as possible among the processors. With sufficiently many cells, the computational requirements of each processor tend to average out, producing an approximate load balance. The approach also satisfies our requirement for flexibility, since the data distribution can be computed trivially for any number of processors, without the need for expensive preprocessing. We also need to evenly distribute the pixel-oriented ray-merging operations. Local variations in cell sizes within the mesh lead directly to variations in depth complexity in image space; therefore we need an image partitioning strategy that disperses the ray-merging operations as well. Scan-line interleaving, which assigns successive image scan-lines to processors in round-robin fashion, generally works well as long as the image's vertical resolution is several times larger than the number of processors. In our current implementation, we use this strategy.
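A minimal C++ sketch of the two interleaving schemes described above is shown below; the helper names are assumptions, since the paper gives no code.

```cpp
#include <vector>

// Owner of cell c among nProcs processors under round-robin distribution.
int cellOwner(long cellId, int nProcs) { return static_cast<int>(cellId % nProcs); }

// Owner of image scan-line y under scan-line interleaving.
int scanlineOwner(int y, int nProcs) { return y % nProcs; }

// Local cell list of processor `rank` for a mesh with nCells cells.
std::vector<long> localCells(long nCells, int nProcs, int rank) {
    std::vector<long> mine;
    for (long c = rank; c < nCells; c += nProcs) mine.push_back(c);
    return mine;
}
```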


3.2 Parallel k-d Tree

Our round-robin data distribution scheme completely destroys the spatial coherence among neighboring mesh cells, making an unstructured dataset even more irregular. We would like to restore some ordering so that the rendering step can be performed more efficiently. The goal is to have all processors render the cells in the same neighborhood at about the same time. Ray segments generated for a particular region will consequently arrive at their image-space destinations within a relatively short window of time, allowing them to be merged early. This early merging reduces the length of the ray-segment list maintained by each processor, which benefits the rendering process in two ways: first, a shorter list reduces the cost of inserting a ray segment at its proper position within the list; second, the memory needed to store unmerged ray segments is reduced. To provide the desired ordering, a parallel k-d tree is constructed cooperatively so that the resulting spatial partitioning is exactly the same on each processor. After the data cells are initially distributed, all processors participate in a synchronized parallel partitioning process. A detailed description of the algorithm has been given by Ma et al. [4].

3.3 Creating an A-Buffer

For a convex mesh, a parallel k-d tree is sufficient for ray segments to be merged early. However, for a non-convex mesh or a mesh with disconnected components, the k-d tree alone is not sufficient. Ray-gaps, i.e. segments of a ray between a point on an exterior face where the ray leaves the mesh and another such point where the ray reenters the mesh, prevent ray segments from being merged early. Our approach to this problem is to add an auxiliary A-Buffer. We identify the exterior faces of the mesh and evenly distribute them to the processors. The exterior faces are then scan-converted, and the ray-face intersections are sent to their image-space destinations. Each processor saves the ray-face intersections received from the other processors in an A-Buffer type of data structure along each ray. The A-Buffer is implemented as an array of pixel (PIX) lists, one per pixel in image space. Fig. 2 shows an A-Buffer on a slice across a scan line through an unstructured mesh with disconnected components. A PIX list consists of a series of PIX list entry records (PIX entries), as described below. As each pixel p of an exterior face f is enumerated by the scan conversion, a new PIX entry is created containing the distance z from the screen to p, a pointer to f, and a next pointer for the PIX list. The PIX entry is then inserted into the appropriate PIX list in the A-Buffer, in order of increasing z. At each pixel location, we maintain a linked list of ray segments, which are merged to form the final pixel value. With the assistance of the PIX lists, the ray segments in each linked list can be merged early even for a non-convex mesh or a mesh with disconnected components.

Fig. 2. Diagram of an A-Buffer on a 2D slice

3.4 Task Management with Asynchronous Communication To reduce computational overheads and improve parallel efficiencies, we adopt an asynchronous communication strategy first suggested by Ma et al. [4].

segments to other processors

buffer buffer

Cell Rendering Task polling for incoming ray segments

Task Switching

scan-convert cells

buffer Local Segments

Image Compositing Task

segments from other processors

ray-segment list

merge ray segments

PIX list

Fig. 3. Task management with asynchronous communications

This scheme allows us to overlap computation and communication, which hides data transfer overheads and spreads the communication load over time. During the course of rendering, there are two main tasks to be performed: scan conversion and image composition. High efficiency is obtained if we can keep all processors busy within either of these two tasks. We employ a polling strategy to interleave the two tasks, and thus achieve a good performance. Fig. 3 illustrates at a high level the management of the two tasks and the accompanying communications. Each processor starts by scan converting one more data cells. Periodically the processor checks to see

A Scalable Parallel Software Volume Rendering Algorithm

487

if incoming ray segments are available; if so, it switches to the merging task, sorting and merging incoming rays until no more input is pending. Because of the large number of ray segments generated, the overheads for communicating each of them individually would be prohibitive in most parallel machines. Instead, it is better to buffer them locally and send many ray segments together in one operation. This strategy is even more effective when multi-buffers are provided for each destination. When a send operation is pending for a full buffer, the scan conversion process can be placing ray segments in other buffers. 3.5 Image Composition Since we divide the image space using a simple scan-line interleaving scheme, the image composition step is very simple. Tiles do not overlap, and pixels are generated independently. So the tiles rendered by each processor correspond directly to subimages, and there is no need for depth composition. When the master processor receives from other processors the sub-images rendered, it simply pastes it on the final image.

4 Experimental Results We implemented our PSSCPA in the C++ language using MPI message passing for inter-processor communication. All the experiments presented subsequently are conducted on a Dawning PC Cluster (Intel CPUs: 48*2.4GHz; Memory: 48GB; Network: 1-Gbit Fast Ethernet) at the Center for Engineering and Scientific Computation (CESC), Zhejiang University (ZJU). We have used two three different datasets in our experiments: G1 and FD. G1 represents the stress distribution on a three-dimensional gear. FD represents the evolution of the structure of interface in three-dimensional Rayleigh-Taylor instability. Both G1 and FD are unstructured grids composed of tetrahedral cells. G1 is a

Fig. 4. Volume rendering of G1

Fig. 5. Volume rendering of FD

488

K. Wangc and Y. Zheng

350 rendering time(seconds)

rendering time(seconds)

60 58.3 50 40 30.7

30 20

16

10

8.7 5.4

0

1

2

8

4

250 200

100

95.7

50

53.2 31.5

0

1

2

4

8

16

17.3 32

the number of processors

the number of processors Fig. 6. The rendering time of G1

Fig. 7. The rendering time of FD 20

FD

178.1

150

3.1 16 32

20

345.2

300

1

19

Parallel Efficiency

G1

15 Speedup

Frame 002 ⏐ 07 Feb 2007 ⏐ No Data Set

10

5

01

2

4

8

16

32

the number of processors Fig. 8. Speedup

0.8

Frame 002 ⏐ 07 Feb 2007 ⏐ No Data Set

FD G1

0.63

0.6

0.4 1

0.59

2

4

8

16

32

the number of processors Fig. 9. Parallel efficiency

The image size in our experiments is 512*512. Figs. 4 and 5 show two volume-rendered views. Figs. 6 and 7 plot the rendering time versus the number of processors. With 32 processors we can render 0.5M tetrahedral cells in 3.1 seconds per frame. Figs. 8 and 9 show the speedups and parallel efficiencies obtained as the number of processors varies from 1 to 32. With 32 processors, we achieve a speedup of 20 and a parallel efficiency of 63% for FD.

5 Conclusions and Future Work

In this paper, by combining a k-d tree and an A-Buffer with an asynchronous communication strategy, we have developed a volume renderer for unstructured meshes which employs inexpensive static load balancing to achieve good performance. Because the partial pre-integration method is adopted in the volume renderer, our system


supports the rendering of data with any classification system. By employing the A-Buffer technique, our system can also handle convex meshes, non-convex meshes, and meshes with disconnected components. In the future, we will extend the PSSCPA to the problem of rendering time-varying data.

Acknowledgements. The authors would like to thank the National Natural Science Foundation of China for the National Science Fund for Distinguished Young Scholars under grant No. 60225009 and for the Major Program of the National Natural Science Foundation of China under Grant No. 90405003. The first author is grateful for the simulation data provided by Jianfeng Zou, the constructive discussions with Dibin Zhou, and the valuable suggestions from Jianjun Chen, Lijun Xie, and Jian Deng.

References
1. Hofsetz, C., Ma, K.-L.: Multi-threaded Rendering of Unstructured-Grid Volume Data on the SGI Origin 2000. In: Third Eurographics Workshop on Parallel Graphics and Visualization (2000)
2. Hong, L., Kaufman, A.: Accelerated Ray-Casting for Curvilinear Volumes. IEEE Visualization '98, October (1998) 247-254
3. Ma, K.-L.: Parallel Volume Ray-Casting for Unstructured-Grid Data on Distributed-Memory Architectures. IEEE Parallel Rendering Symposium, October (1995) 23-30
4. Ma, K.-L., Crockett, T.: A Scalable Parallel Cell-Projection Volume Rendering Algorithm for Three-Dimensional Unstructured Data. IEEE Parallel Rendering Symposium, November (1997) 95-104
5. Moreland, K., Angel, E.: A Fast High Accuracy Volume Renderer for Unstructured Data. In: Proceedings of IEEE Symposium on Volume Visualization and Graphics 2004, October (2004) 9-16
6. Carpenter, L.: The A-buffer, an Antialiased Hidden Surface Method. In: Computer Graphics Proc., SIGGRAPH '84, July (1984) 103-108
7. Nieh, J., Levoy, M.: Volume Rendering on Scalable Shared-Memory MIMD Architectures. In: 1992 Workshop on Volume Visualization Proceedings, October (1992) 17-24
8. Farias, R., Silva, C.: Parallelizing the ZSWEEP Algorithm for Distributed-Shared Memory Architectures. In: International Workshop on Volume Graphics, October (2001) 91-99
9. Farias, R., Bentes, C., Coelho, A., Guedes, S., Goncalves, L.: Work Distribution for Parallel ZSweep Algorithm. In: XVI Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'03) (2003) 107-114
10. Chen, L., Fujishiro, I., Nakajima, K.: Parallel Performance Optimization of Large-Scale Unstructured Data Visualization for the Earth Simulator. In: Proceedings of the Fourth Eurographics Workshop on Parallel Graphics and Visualization (2002) 133-140

Geometry-Driven Nonlinear Equation with an Accelerating Coupled Scheme for Image Enhancement*

Shujun Fu1,2,**, Qiuqi Ruan2, Chengpo Mu3, and Wenqia Wang1

1 School of Mathematics and System Sciences, Shandong University, Jinan, 250100, China
2 Institute of Information Science, Beijing Jiaotong University, Beijing, 100044, China
3 School of Aerospace Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
** [email protected]

Abstract. In this paper, a geometry-driven nonlinear shock-diffusion equation is presented for image denoising and edge sharpening. An image is divided into three types of regions according to image features: edges, textures and details, and flat areas. For edges, a shock-type backward diffusion is performed in the gradient direction (normal to the isophote line, i.e. the edge), combined with a forward diffusion in the isophote line direction; for textures and details, a soft backward diffusion is applied to enhance image features while preserving a natural transition. Moreover, an isotropic diffusion is used to smooth flat areas simultaneously. Finally, a shock-capturing scheme with a special limiter function is developed to speed up the process with numerical stability. Experiments on real images show that this method produces better visual results of the enhanced images than some related equations.

1 Introduction

The main information in an image resides in features such as edges, local details and textures. Image features are not only very important to the visual quality of the image, but are also significant for image post-processing tasks, for example image segmentation, image recognition and image comprehension. Among image features, edges are the most general and important; they separate different objects in an image. Because of limitations of the imaging process, however, edges may not be sharp in images. In addition to noise, both a small intensity difference across an edge and a large edge width result in a weak and blurry edge [1]. In the past decades there has been a growing amount of research concerning partial differential equations (PDEs) in image enhancement, such as anisotropic diffusion filters [2-5] for edge-preserving noise removal, and shock filters [6-9] for edge sharpening. The many successful applications of nonlinear evolving PDEs in "low level" image

This work was supported by the natural science fund of Shandong province, P.R. China (No. Y2006G08); the researcher fund for the special project of Beijing Jiaotong University, P.R. China (No. 48109); the open project of the National Laboratory of Pattern Recognition at the Institute of Automation of the Chinese Academy of Sciences, P.R. China; the general program project of School of Mathematics and System Sciences of Shandong University, P.R. China (No. 306002).




processing can mainly be attributed to their two basic characteristics: "to be local" and "to be iterative". The word "differential" means that an algorithm performs local processing, while the word "evolving" means that it is iterative when implemented numerically. One of the most influential works using partial differential equations (PDEs) in image processing is the anisotropic diffusion (AD) filter, proposed by P. Perona and J. Malik [3] for image denoising, enhancement, sharpening, etc. The scalar diffusivity is chosen as a non-increasing function that governs the behaviour of the diffusion process. Different from the nonlinear parabolic diffusion process, L. Alvarez and L. Mazorra [7] proposed an anisotropic diffusion with shock filter (ADSF) equation by adding a hyperbolic equation, the shock filter introduced by S.J. Osher and L.I. Rudin [6], for noise elimination and edge sharpening. In image enhancement and sharpening, it is crucial to preserve and even enhance image features while removing image noise and sharpening edges. Therefore, image enhancement is composed of two steps: feature detection, and processing each feature with a corresponding tactic. In this paper, incorporating anisotropic diffusion with a shock filter, we present a geometry-driven nonlinear shock-diffusion equation to remove image noise and to sharpen edges by reducing their width. An image comprises regions with different features. Utilizing techniques of differential geometry, we partition the local structures and features of an image into flat areas, edges, details such as corners, junctions and fine lines, and textures. These structures should be treated differently to obtain a better result in an image processing task. In our algorithm, for edges between different objects, a shock-type backward diffusion is performed in the gradient direction (normal to the isophote line, i.e. the edge), combined with a forward diffusion in the isophote line direction. For textures and details, shock filters with the sign function enhance image features in a binary decision process, which unfortunately produces a false piecewise-constant result. To overcome this drawback, we use a hyperbolic tangent function to softly control the changes of gray levels of the image. As a result, a soft shock-type backward diffusion is introduced to enhance these features while preserving a natural transition in these areas. Finally, an isotropic diffusion is used to smooth flat areas simultaneously. In order to solve the nonlinear equation effectively and obtain discontinuous solutions with numerical stability, after discussing the difficulty of numerically implementing this type of equation, we develop a shock-capturing scheme with a special limiter function to speed up the process. This paper is organized as follows. In section 2, some related equations for enhancing images are introduced: anisotropic diffusions and shock filters. Then we propose the geometry-driven shock-diffusion equation. In section 3, we implement the proposed method and test it on real images. Conclusions are presented in section 4.

2 Geometry-Driven Shock-Diffusion Equation

2.1 Differentials of a Typical Ramp Edge and Edge Sharpening

We first analyze the differential properties of a typical ramp edge. In Fig. 1(a), a denotes the profile of a ramp edge, whose center point is o, and b and c denote its first and second differential curves, respectively. It is evident that b increases in value from 0, reaches its maximum at o, and then decreases to 0, while c changes its sign at o, from positive to negative. Here we control the changes of gray level on either side of the edge center o. More precisely, we reduce the gray levels of pixels on the left of o (whose second derivatives are positive) and increase those on the right of o (whose second derivatives are negative); in this way the edge is sharpened by reducing its width (see Fig. 1(b)). The shock-type diffusions introduced later are based on this analysis.


Fig. 1. Differentials of a typical ramp edge and edge sharpening. (a) Differentials of a 1D typical ramp edge a, with center o, and its first and second differentials b and c, respectively; (b) the edge sharpening process (solid line), compared with the original edge (broken line).
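The 1D rule above can be made concrete with a small numerical experiment. The following sketch is a hypothetical illustration (not the authors' code): it darkens samples whose second difference is positive and brightens those where it is negative, so a ramp contracts toward a step.

import numpy as np

# synthetic 1D ramp edge: flat region, linear transition, flat region
ramp = np.concatenate([np.zeros(20), np.linspace(0.0, 1.0, 21), np.ones(20)])

def sharpen_1d(u, dt=0.2, steps=30):
    u = u.copy()
    for _ in range(steps):
        uxx = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)   # second difference (its sign tells the side of o)
        ux = 0.5 * (np.roll(u, -1) - np.roll(u, 1))      # central first difference
        # decrease gray levels where u_xx > 0 (left of o), increase where u_xx < 0 (right of o)
        u[1:-1] -= dt * np.sign(uxx[1:-1]) * np.abs(ux[1:-1])
    return u

print(np.round(sharpen_1d(ramp), 2))

After enough iterations the transition narrows while the flat regions remain untouched, which is the sharpening behaviour sketched in Fig. 1(b).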

2.2 Local Differential Structure of an Image

Consider an image as a real function u(x, y) on a 2D rectangular domain Ω; an image edge is an isophote line (level set line), along which the image intensity is constant. The image gradient is a vector, u_N = ∇u = (u_x, u_y). If ∇u ≠ 0, then a local coordinate system can be defined at a point o:

\vec{N} = \frac{\nabla u}{|\nabla u|}, \qquad \vec{T} = \frac{\nabla u^{\perp}}{|\nabla u^{\perp}|}

where ∇u^⊥ = (−u_y, u_x). The Hessian matrix of the image function u(x, y) is:

H_u = \begin{pmatrix} u_{xx} & u_{xy} \\ u_{xy} & u_{yy} \end{pmatrix}

For two vectors X and Y, we define H_u(X, Y) = X^T H_u Y. Thus, we have the second directional derivatives in the directions \vec{N} and \vec{T}:

u_{NN} = H_u(\vec{N}, \vec{N}) = (u_x^2 u_{xx} + u_y^2 u_{yy} + 2 u_x u_y u_{xy}) / |\nabla u|^2

u_{TT} = H_u(\vec{T}, \vec{T}) = (u_x^2 u_{yy} + u_y^2 u_{xx} - 2 u_x u_y u_{xy}) / |\nabla u|^2

By calculation, the curvature of the isophote line at a point o is:

k = \mathrm{div}(\vec{N}) = u_{TT} / |\nabla u| = (u_x^2 u_{yy} + u_y^2 u_{xx} - 2 u_x u_y u_{xy}) / |\nabla u|^3

where div is the divergence operator.
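As an illustration only (not the authors' implementation), the quantities u_NN, u_TT and k can be evaluated on a discrete image with finite differences:

import numpy as np

def local_structure(u, eps=1e-8):
    """Second directional derivatives u_NN, u_TT and isophote curvature k of a 2D image u."""
    uy, ux = np.gradient(u)          # derivatives along rows (y) and columns (x)
    uyy, _ = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    g2 = ux**2 + uy**2 + eps         # |grad u|^2; eps avoids division by zero in flat areas
    u_nn = (ux**2 * uxx + uy**2 * uyy + 2.0 * ux * uy * uxy) / g2
    u_tt = (ux**2 * uyy + uy**2 * uxx - 2.0 * ux * uy * uxy) / g2
    k = u_tt / np.sqrt(g2)           # curvature of the isophote line
    return u_nn, u_tt, k

# example: u_nn, u_tt, k = local_structure(np.random.rand(64, 64))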

2.3 The Geometry-Driven Shock-Diffusion Equation

An image comprises regions with different features, such as edges, textures and details, and flat areas, which should be treated differently to obtain a better result in an image processing task. We divide an image into three types of region by its smoothed gradient magnitude. For edges between different objects, a shock-type backward diffusion is performed in the gradient direction, combined with a forward diffusion along the isophote line. For textures and details, enhancing an image with the sign function sign(x), as in equations (3) and (4), is a binary decision process, a hard partition without a middle transition; unfortunately, the result is a false piecewise-constant image in some areas, with poor visual quality. We notice that the change of texture and detail is gradual in these areas. In order to approximate this change, we use a hyperbolic tangent membership function th(x) to guarantee a natural smooth transition in these areas, by softly controlling the changes of gray levels of the image. As a result, a soft shock-type backward diffusion is introduced to enhance these features. Finally, an isotropic diffusion is used to smooth flat areas. Thus, incorporating a shock filter with anisotropic diffusion, we develop a nonlinear geometry-driven shock-diffusion equation (GSE) to reduce noise and to sharpen edges while preserving image features: let (x, y) ∈ Ω ⊂ R², t ∈ [0, +∞), and let u(x, y, t): Ω × [0, +∞) → R be a multi-scale image,

\begin{cases} u_G = G_\sigma * u \\ \dfrac{\partial u}{\partial t} = c_N u_{NN} + c_T u_{TT} - w(u_{NN})\,\mathrm{sign}((u_G)_{NN})\,|u_N| \end{cases}     (1)

with Neumann boundary condition, where the parameters are chosen as follows according to different image regions:

region                      c_N    c_T                      w(u_NN)
(u_G)_N > T_1               0      1/(1 + l_1 u_{TT}^2)     1
T_2 < (u_G)_N ≤ T_1         0      1/(1 + l_1 u_{TT}^2)     th(l_2 u_{NN})
else                        1      1                        0

where G_σ is defined in the previous section, and c_N and c_T are the normal and tangent flow control coefficients, respectively. The tangent flow control coefficient is used to prevent excess smoothing of smaller details; l_2 is a parameter that controls the slope of the membership function th(x); T_1 and T_2 are two thresholds; l_1 and l_2 are constants.
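Reading the table above literally, a per-pixel selection of c_N, c_T and w(u_NN) could be sketched as follows (a hypothetical illustration; T1, T2, l1 and l2 are assumed inputs, and (u_G)_N, u_NN and u_TT are precomputed arrays):

import numpy as np

def region_coefficients(grad_G, u_nn, u_tt, T1, T2, l1, l2):
    """Return per-pixel c_N, c_T and w(u_NN) according to the three region types."""
    c_n = np.ones_like(grad_G)               # default (flat areas): isotropic diffusion
    c_t = np.ones_like(grad_G)
    w = np.zeros_like(grad_G)
    flow = 1.0 / (1.0 + l1 * u_tt**2)        # tangent flow control coefficient

    edges = grad_G > T1                      # strong edges: hard shock, no diffusion across the edge
    c_n[edges] = 0.0
    c_t[edges] = flow[edges]
    w[edges] = 1.0

    detail = (grad_G > T2) & ~edges          # textures and details: soft shock via th(x)
    c_n[detail] = 0.0
    c_t[detail] = flow[detail]
    w[detail] = np.tanh(l2 * u_nn[detail])

    return c_n, c_t, w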

3 Numerical Implementation and Experimental Results

3.1 A Shock Capturing Scheme

The nonlinear convection-diffusion evolution equation is a very important model in fluid dynamics, where it is used to describe transport processes of momentum, energy and mass. Because of its hyperbolic character, the solution of a convection-diffusion equation often develops discontinuities even if its initial condition is very smooth; mathematically, only a weak solution can be obtained. If a weak solution satisfies the entropy-increase principle for an adiabatic irreversible system, it is called a shock wave.

When a convection-diffusion equation is solved numerically with a difference scheme, several troublesome problems can appear in the simulation, such as instability, over-smoothing, spurious oscillation or wave shift. The reason is that, although the original equation is derived from physical conservation laws, its discrete counterpart may deviate from these laws, which brings about numerical dissipation, numerical dispersion and group-velocity (wave-packet) effects in the numerical solution, especially for the hyperbolic term. Therefore, the hyperbolic term must be discretized carefully so that small-scale flow and shock waves can be captured accurately. Besides consistency and stability, a good numerical scheme also needs to capture shock waves. One method is to add an artificial viscosity term to the difference scheme to control and limit numerical fluctuations near shock waves; however, with this method it is inconvenient to adjust the free parameters for different tasks, and the resolution of shocks is also affected. Another method is to suppress numerical fluctuations before they appear, based on the TVD (Total Variation Diminishing) condition and nonlinear limiters. The main idea is to use a limiter function to control the change of the numerical solution in a nonlinear way; the corresponding schemes satisfy the TVD condition and eliminate the disadvantages above, which guarantees capturing shock waves with high resolution.

In a word, when solving a nonlinear convection-diffusion equation like (1) numerically with a difference scheme, the hyperbolic term must be discretized carefully because discontinuous solutions, numerical instability and spurious oscillation may appear; shock capturing methods with high resolution are effective tools. For more details, we refer the reader to the book [10]. Here, we develop a speeding scheme by using a proper limiter function. An explicit Euler method with a central difference scheme is used to approximate equation (1), except for the gradient term |u_N|. Below we detail the numerical approach. On the image grid, the approximate solution satisfies:

u_{ij}^n \approx u(ih, jh, n\Delta t), \quad i, j, n \in \mathbb{Z}^+     (2)

where h and Δt are the spatial and temporal steps, respectively. Let h = 1, and let δ⁺u_{ij}^n and δ⁻u_{ij}^n be the forward and backward differences of u_{ij}^n, respectively. A limiter function MS is used to approximate the gradient term:

|u_N| = \sqrt{ (MS(\delta_x^+ u_{ij}^n, \delta_x^- u_{ij}^n))^2 + (MS(\delta_y^+ u_{ij}^n, \delta_y^- u_{ij}^n))^2 }     (3)

where

MS(x, y) = \begin{cases} x, & |x| < |y| \\ y, & |x| > |y| \\ x, & |x| = |y| \ \text{and} \ xy > 0 \\ 0, & |x| = |y| \ \text{and} \ xy \le 0 \end{cases}     (4)
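A direct transcription of (3) and (4) into array code might look like the following sketch (hypothetical helper code, not the authors' implementation; the wrap-around boundary handling is an arbitrary choice here):

import numpy as np

def ms(x, y):
    """MS limiter of Eq. (4): keep the smaller-magnitude argument; return 0 only for an equal-magnitude sign conflict."""
    ax, ay = np.abs(x), np.abs(y)
    out = np.where(ax < ay, x, y)
    tie = ax == ay
    out = np.where(tie & (x * y > 0), x, out)
    out = np.where(tie & (x * y <= 0), 0.0, out)
    return out

def grad_magnitude(u):
    """Approximation of |u_N| as in Eq. (3), built from one-sided differences."""
    dxp = np.roll(u, -1, axis=1) - u     # forward difference in x
    dxm = u - np.roll(u, 1, axis=1)      # backward difference in x
    dyp = np.roll(u, -1, axis=0) - u
    dym = u - np.roll(u, 1, axis=0)
    return np.sqrt(ms(dxp, dxm) ** 2 + ms(dyp, dym) ** 2)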


The MS function takes the value 0 at fewer points of the x-y plane than the minmod function does, while still keeping the scheme numerically stable. Because the gradient term represents the transport speed of the scheme, the MS function makes our scheme evolve faster, with a larger transport speed, than schemes based on the minmod function. In [8], instead of the above flux-limitation technique, a fidelity term (u − u_0) is used to carry out the stabilization task, and the authors also showed that the SNRs of the results tend towards 0 if the fidelity coefficient is 0. However, this is not enough to eliminate overshoots, and the term also affects performance.

3.2 The Coupled Iteration

Based on the preceding discussion, when implementing equation (1) iteratively, we find that the shock and diffusion forces cancel each other when kept in a single formula. We therefore split equation (1) into two formulas and propose the following coupled scheme, iterated over time steps:

\begin{cases} v^0 = u^0, \quad u_G = G_\sigma * u \\ v^{n+1} = u^n + \Delta t \, \big( - w(u_{NN}^n)\,\mathrm{sign}((u_G)_{NN})\,|u_N^n| \big) \\ u^{n+1} = v^{n+1} + \Delta t \, \big( c_N v_{NN}^{n+1} + c_T v_{TT}^{n+1} \big) \end{cases}     (5)

where Δt is the time step and u^0 is the original image. By computing iteratively in the order u^0 → v^0 → v^1 → u^1 → v^2 → u^2 → ⋯, we obtain the enhanced image after some steps.

3.3 Experiments

We present results obtained with our scheme (5) and compare its performance with those of the related methods above, where the parameters of every method are selected so that it produces its best result. We compare the methods on the blurred Cameraman image (Gaussian blur, σ = 2.5) with added high-level noise (SNR = 14 dB). In this case, weaker features are smeared by the heavy noise and are difficult to restore completely. Although the AD method denoises the image well, especially in the smoother segments, it produces a blurry image with unsharp edges: its ability to sharpen edges is limited because of the improper diffusion coefficient along the gradient direction. Moreover, with a diffusion coefficient inversely proportional to the image gradient magnitude along the tangent direction, it does not diffuse sufficiently in that direction and produces rough contours. The ADSF method sharpens edges very well, but its binary decision process yields false piecewise-constant images, which look unnatural, with discontinuous transitions in the homogeneous areas; furthermore, it cannot reduce noise well with only a single directional diffusion in the smoother regions. The best visual quality is obtained by enhancing the image with GSE, which enhances most features of the image with a natural transition in the homogeneous areas and produces pleasingly sharp edges and smooth contours while denoising the image effectively.


Finally, we discuss how these methods smooth image contours where the gradient is larger in the tangent direction of edges. As explained above, the image contours obtained by AD are not smooth, and its edges are blurry in the gradient direction. The results obtained with ADSF and with GSE both present smooth contours in the tangent direction.

4 Conclusions

This paper deals with image enhancement for noisy blurry images. By reducing the width of edges, a geometry-driven nonlinear shock-diffusion equation is proposed to remove noise and to sharpen edges. Our model provides a powerful process for noisy blurry images, with which we can not only remove noise and sharpen edges effectively, but also smooth image contours even in the presence of high-level noise. Enhancing image features such as edges, textures and details with a natural transition in interior areas, this method produces better visual quality than the related equations discussed above.

References

1. Castleman, K.R.: Digital Image Processing. Prentice Hall (1995)
2. Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Applied Mathematical Sciences, vol. 147, Springer-Verlag (2001)
3. Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Machine Intell., 12(7) (1990) 629-639
4. Nitzberg, M., Shiota, T.: Nonlinear image filtering with edge and corner enhancement. IEEE Transactions on PAMI, 14(8) (1992) 826-833
5. You, Y.L., Xu, W., Tannenbaum, A., Kaveh, M.: Behavioral analysis of anisotropic diffusion in image processing. IEEE Trans. on Image Processing, 5(11) (1996) 1539-1553
6. Osher, S.J., Rudin, L.I.: Feature-oriented image enhancement using shock filters. SIAM J. Numer. Anal., 27 (1990) 919-940
7. Alvarez, L., Mazorra, L.: Signal and image restoration using shock filters and anisotropic diffusion. SIAM J. Numer. Anal., 31(2) (1994) 590-605
8. Kornprobst, P., Deriche, R., Aubert, G.: Image coupling, restoration and enhancement via PDE's. IEEE ICIP, 2 (1997) 458-461
9. Gilboa, G., Sochen, N., Zeevi, Y.Y.: Image enhancement and denoising by complex diffusion processes. IEEE Transactions on PAMI, 26(8) (2004) 1020-1036
10. Liu, R.X., Shu, Q.W.: Some New Methods in Computing Fluid Dynamics. Science Press of China, Beijing (2004)

A Graph Clustering Algorithm Based on Minimum and Normalized Cut

Jiabing Wang 1, Hong Peng 1, Jingsong Hu 1, and Chuangxin Yang 1,2

1 School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China
2 Guangdong University of Commerce, Guangzhou 510320, China
{jbwang, mahpeng, cshjs}@scut.edu.cn

Abstract. Clustering is the unsupervised classification of patterns into groups. In this paper, a clustering algorithm for a weighted similarity graph is proposed based on the minimum and normalized cuts. The minimum cut is used as the stopping condition of the recursive algorithm, and the normalized cut is used to partition a graph into two subgraphs. The algorithm combines the advantages of many existing algorithms: it is a nonparametric clustering method, with low polynomial complexity and provable properties. The algorithm is applied to image segmentation; the provable properties together with experimental results demonstrate that the algorithm performs well.

Keywords: graph clustering, minimum cut, normalized cut, image segmentation.

1 Introduction

Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters) [1-2]. It groups a set of data in a way that maximizes the similarity within clusters and minimizes the similarity between two different clusters. Due to its wide applicability, the clustering problem has been addressed in many contexts; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. In the past decade, one of the most active research areas in data clustering has been spectral graph partitioning, e.g. [3-7], because of the following advantages: the number of clusters does not need to be given beforehand, the computational complexity is low polynomial, etc. In spectral graph partitioning, the original clustering problem is first transformed into a graph model; then, the graph is partitioned into subgraphs using a linear algebraic approach.

Minimum cuts in similarity graphs were used by Wu and Leahy [8] and by Hartuv and Shamir [9]. The minimum cut often causes an unbalanced partition; it may cut off a portion of a graph with a small number of vertices [3-4]. In the context of graph clustering this is, in general, not desirable. To avoid partitioning out a small part of a graph by using the edge cut alone, many graph partitioning criteria have been proposed, such as the ratio cut [10], the normalized cut [4], the min-max cut [11], etc. Soundararajan and Sarkar [12] made an in-depth study evaluating the following partitioning criteria: minimum cut, average cut, and normalized cut.

The normalized cut proposed by Shi and Malik [4] is a spectral clustering algorithm and has been successfully applied in many domains [13-18], especially in image segmentation. However, when applying the normalized cut to recursively partition data into clusters, a real-number parameter, the stopping condition, must be given beforehand [4]. To our knowledge, there are no theoretical results about how to select this parameter, and if the parameter is inappropriate, the clustering result can be very poor (see the example shown in Fig. 1 and Fig. 2 in section 2). In this paper, we propose a clustering algorithm based on the minimum and normalized cuts. Thanks to a novel definition of a cluster for a weighted similarity graph, the algorithm does not need a stopping condition to be given beforehand, and it has many good properties: low polynomial complexity, provable properties, and automatic determination of the number of clusters in the process of clustering.

The rest of the paper is organized as follows. In section 2, we give some basic definitions, a brief review of the normalized cut, and the description of our algorithm. In section 3, we prove some properties of the algorithm. In section 4, we apply the algorithm to image segmentation and give some preliminary results. The paper concludes with some comments.

2 The MAN-C Algorithm

A weighted, undirected graph G = (V, E, W) consists of a set V of vertexes, a set E of edges, and a weight matrix W. The positive weight w_ij on an edge connecting two nodes i and j denotes the similarity between i and j. For a weighted, undirected graph G, we also use n to denote the number of vertexes of G, m the number of edges of G, and T the sum of weights of all edges of G. The distance d(u, v) between vertices u and v in G is the minimum length of a path joining them, if such a path exists; otherwise d(u, v) = ∞ (the length of a path is the number of edges in it). The degree of a vertex v in a graph, denoted deg(v), is the number of edges incident on it. We say that A and B partition the set V if A ∪ B = V and A ∩ B = ∅; we denote the partition by the unordered pair (A, B). The cost of a partition of a graph is the sum of the weights of the edges connecting the two parts cut by the partition, i.e.,

Cut(A, B) = \sum_{i \in A,\, j \in B} w_{ij} .

The graph minimum cut (abbreviated min-cut) problem is to find a partition (A, B) such that the cost of the partition is minimal, i.e.,

\min Cut(A, B) = \min \sum_{i \in A,\, j \in B} w_{ij} .

Shi and Malik [4] proposed a normalized similarity criterion to evaluate a partition. They call this criterion the normalized cut:

Ncut = \frac{Cut(A, B)}{assoc(A, V)} + \frac{Cut(B, A)}{assoc(B, V)} .


where assoc(A, V) = \sum_{i \in A,\, j \in V} w_{ij} is the total connection from nodes in A to all the nodes in the graph, and assoc(B, V) is similarly defined. It is clear that the optimal partition can be achieved by minimizing Ncut. The theoretical attraction of the normalized cut lies in its analytical solution. The near-optimal solution of the normalized cut can be obtained by solving the relaxed generalized eigensystem. One key advantage of using the normalized cut is that a good approximation to the optimal partition can be computed very efficiently. Let D represent a diagonal matrix such that D_{ii} = \sum_{j \in V} w_{ij}, i.e., D_{ii} is the sum of the weights of all the connections to node i. Then, the problem of minimizing Ncut can be written as expression (1), which can be reduced to the generalized eigenvalue system (2) [4]:

\min Ncut = \min_x \frac{x^T (D - W) x}{x^T D x}     (1)

(D - W) x = \lambda D x     (2)

where x represents the eigenvectors, in the real domain, which contain the necessary segmentation information. The eigenvector with the second smallest eigenvalue, called the Fiedler vector, is often used to indicate the membership of data points in a subset; it provides the linear search order for the splitting point that minimizes the Ncut objective. As stated in the introduction, when applying the normalized cut to recursively partition data into clusters, a parameter, the stopping condition, must be given beforehand, and if the parameter is inappropriate the clustering result can be very poor. An example is given in Fig. 1 and Fig. 2, which show the clustering results for different parameters. In Fig. 1, the leftmost image is the original image, the second one is the clustering result with the stopping condition Ncut < 0.25, and the other two images are the clustering results with the stopping condition Ncut < 0.5.

Fig. 1. The leftmost image is the original image. The second one is the clustering result when Ncut < 0.25. The other two images are the clustering results when Ncut < 0.5.

Fig. 2. The clustering result when Ncut < 1


Fig. 2 shows the clustering result with the stopping condition Ncut < 1. We can see that different parameters result in different clusterings; in particular, when Ncut < 0.25, the normalized cut cannot segment the original image at all. The selection of the stopping condition is therefore very important when applying the normalized cut to data clustering. However, to our knowledge, there are no theoretical results on how to select this parameter. In order to avoid the issue of parameter selection, we give the following definition:

Definition 1. For a graph G = (V, E, W), G is a cluster if and only if min-cut(G) ≥ (nT)/(2m), where n is the number of vertexes of G, m is the number of edges of G, and T is the sum of weights of all edges of G.

We will see that such a definition of a cluster results in good properties, as described in section 3. According to Definition 1, we obtain the clustering algorithm MAN-C (Minimum And Normalized Cut) shown in Fig. 3.

Algorithm MAN-C(G)
begin
  C ← Min-Cut(G);
  if (C < (n_G T_G)/(2 m_G))
    (H, H′) ← Ncut(G);
    MAN-C(H);
    MAN-C(H′);
  else
    return G;
  end if
end

Fig. 3. The MAN-C algorithm

In Fig. 3, n_G is the number of vertexes of G, m_G is the number of edges of G, and T_G is the sum of weights of all edges of G. The procedure Min-Cut(G) returns the minimum cut value C, and Ncut(G) returns two subgraphs H and H′ computed by the normalized cut. The procedure MAN-C returns a graph when it identifies it as a cluster, and subgraphs identified as clusters are returned by lower levels of the recursion. Single vertices are not considered clusters and are grouped into a set S of singletons. The collection of subgraphs returned when applying MAN-C to the original graph constitutes the overall solution. The running time of the MAN-C algorithm is bounded by N × (2 f_1(n, m) + f_2(n, m)), where N is the number of clusters found by MAN-C, f_1(n, m) is the time complexity of computing a minimum cut, and f_2(n, m) is the time complexity of computing a normalized cut in a graph with n vertexes and m edges. The usual approach to the min-cut problem is to use its close relationship to the maximum flow problem. Nagamochi and Ibaraki [19] published the first deterministic min-cut algorithm that is not based on a flow algorithm; it has the fastest running time of O(nm), but is rather complicated. Stoer and Wagner [20] published a min-cut algorithm with the same running time as Nagamochi and Ibaraki's, but it is very simple.
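For illustration only (not the authors' code), the recursion can be written with networkx, whose stoer_wagner routine provides the minimum cut value; ncut_split is a helper, assumed here, that bipartitions a graph with the normalized cut (one possible sketch of it is given at the end of this section).

import networkx as nx

def man_c(G, clusters, singletons):
    """Recursive MAN-C on a connected, weighted, undirected nx.Graph with 'weight' edge attributes."""
    if G.number_of_nodes() < 2:
        singletons.update(G.nodes)           # single vertices go into the singletons set S
        return
    n, m = G.number_of_nodes(), G.number_of_edges()
    T = G.size(weight="weight")              # sum of all edge weights
    cut_value, _ = nx.stoer_wagner(G, weight="weight")
    if cut_value >= n * T / (2.0 * m):       # Definition 1: G is a cluster
        clusters.append(G)
        return
    A, B = ncut_split(G)                     # assumed normalized-cut bipartition helper
    man_c(G.subgraph(A).copy(), clusters, singletons)
    man_c(G.subgraph(B).copy(), clusters, singletons)

Calling man_c(G0, clusters=[], singletons=set()) on the original graph G0 collects the clusters and the singleton set.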


The normalized cut can be computed efficiently using the Lanczos method with running time O(kn) + O(k M(n)) [4], where k is the maximum number of matrix-vector computations required and M(n) is the cost of a matrix-vector computation Ax, where A = D^{-1/2}(D - W) D^{-1/2} (see formula (1)).
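One way to realize the Ncut bipartition of (1)-(2) is sketched below (an illustration, not the authors' implementation): the generalized eigenproblem is solved densely with SciPy and the graph is split at the median of the Fiedler vector; the proper procedure instead searches along the Fiedler ordering for the splitting point that minimizes Ncut.

import numpy as np
import networkx as nx
from scipy.linalg import eigh

def ncut_split(G):
    """Bipartition G using the Fiedler vector of (D - W) x = lambda D x (dense; fine for small graphs)."""
    nodes = list(G.nodes)
    W = nx.to_numpy_array(G, nodelist=nodes, weight="weight")
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W, D)              # generalized symmetric eigenproblem, eigenvalues ascending
    fiedler = vecs[:, 1]                     # eigenvector of the second smallest eigenvalue
    thr = np.median(fiedler)
    A = [nodes[i] for i, v in enumerate(fiedler) if v >= thr]
    B = [nodes[i] for i, v in enumerate(fiedler) if v < thr]
    if not B:                                # degenerate case: fall back to an arbitrary split
        A, B = nodes[: len(nodes) // 2], nodes[len(nodes) // 2:]
    return A, B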

3 Properties of the MAN-C Algorithm

In this section we prove some properties of the clusters produced by the MAN-C algorithm. These demonstrate the homogeneity of the solution.

Definition 2. For a graph G = (V, E, W) and a vertex x ∈ V, we define the average edge weight (AEW) of x by formula (3); that is, the AEW of a vertex x is the average weight of all edges incident on x:

\varpi_x = \frac{\sum_{v \in V} w_{xv}}{\deg(x)}     (3)

Theorem 1. For a cluster G = (V, E, W), the following properties hold:
1. For each pair of vertices v1 and v2, if ϖ_{v1} ≤ T/m and ϖ_{v2} ≤ T/m, then the distance between v1 and v2 is at most two.
2. For each vertex x ∈ V, ϖ_x > T/(2m).
3. There are O(n²) edges in G.

Proof. Assertion (1). When all edges incident on a vertex are removed, a disconnected graph results. Therefore the following inequality holds according to Definition 1:

\sum_{v \in V} w_{xv} \ge \frac{nT}{2m}, \quad \text{for each } x \in V,

or equivalently,

\deg(x)\,\varpi_x \ge \frac{nT}{2m}, \quad \text{for each } x \in V,

i.e.,

\deg(x) \ge \frac{nT}{2m}\,\frac{1}{\varpi_x}, \quad \text{for each } x \in V.     (4)

So, if ϖ_{v1} ≤ T/m and ϖ_{v2} ≤ T/m, then deg(v1) ≥ n/2 and deg(v2) ≥ n/2. Since deg(v1) + deg(v2) ≥ n, the distance between v1 and v2 is at most two, as they have a common neighbor.

Assertion (2). By formula (4), we have:

\varpi_x \ge \frac{n}{2\deg(x)}\,\frac{T}{m}.

Since deg(x) ≤ n − 1 for each x ∈ V, we have

\varpi_x \ge \frac{n}{2(n-1)}\,\frac{T}{m} > \frac{T}{2m}.     (5)


Assertion (3). By formula (4), summing over all vertexes in V we get:

\sum_{x \in V} \deg(x)\,\varpi_x \ge n\,\frac{n}{2}\,\frac{T}{m} = \frac{n^2 T}{2m}.

Equivalently,

2T \ge \frac{n^2 T}{2m}.

That is,

m \ge \frac{n^2}{4}.     (6)

That is, there are O(n²) edges in G. ∎

The provable properties of MAN-C shown in Theorem 1 are a strong indication of homogeneity. By Theorem 1, the average edge weight of each vertex in a cluster G must be no less than half of the average edge weight of G, and if the average edge weight of some vertex is small, then it must have more neighbors. Moreover, Theorem 1 shows that each cluster is at least half as dense as a clique, which is another strong indication of homogeneity.

4 Experimental Results

Image segmentation is a hot topic in image processing, and we have applied the MAN-C algorithm to it. In order to apply MAN-C to image segmentation, a similarity graph must be constructed. In our experiments, the similarity graph is constructed as follows:
1. Construct a weighted graph G by taking each pixel as a node and connecting each pair of pixels by an edge;
2. For a gray image, using just the brightness values of the pixels, we define the edge weight connecting two nodes i and j by formula (7);
3. For a color image, using the HSV values of the pixels, we define the edge weight connecting two nodes i and j by formula (8).

w_{ij} = e^{-\frac{(I(i) - I(j))^2}{\sigma_I}}     (7)

where I(i) is the intensity value of a pixel i, and σ_I is a parameter set to 0.1 [4].

w_{ij} = e^{-\frac{\| F(i) - F(j) \|_2^2}{\sigma_I}}     (8)

where F(i) = [v, v·s·sin(h), v·s·cos(h)](i), and h, s, v are the HSV values [4].

Considering the space limitation, we give three examples, shown in Fig. 4, Fig. 5, and Fig. 6. Note that each cluster is drawn using black as the background, so the cluster containing the background is omitted.
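As a small illustration of step 1 and formula (7) (hypothetical code, not the authors' implementation; it connects every pixel pair, so it is only practical for tiny images):

import numpy as np
import networkx as nx

def similarity_graph_gray(I, sigma_I=0.1):
    """Fully connected similarity graph for a grayscale image I with intensities in [0, 1]."""
    vals = I.ravel()
    G = nx.Graph()
    G.add_nodes_from(range(vals.size))
    for i in range(vals.size):
        for j in range(i + 1, vals.size):
            w = float(np.exp(-((vals[i] - vals[j]) ** 2) / sigma_I))
            G.add_edge(i, j, weight=w)
    return G

# e.g. G = similarity_graph_gray(np.random.rand(8, 8))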


Fig. 4. The leftmost is the original image and the other images are the clusters produced by MAN-C

Fig. 5. The leftmost is the original image and the other images are the clusters produced by MAN-C

Fig. 6. The leftmost is the original image and the other images are the clusters produced by MAN-C

From the results, we can see that the MAN-C algorithm performs well on all images.

5 Conclusions

Based on a novel definition of a cluster, a graph-theoretic clustering algorithm, MAN-C, is proposed using the minimum and normalized cuts. The minimum cut is used as the stopping condition of the recursive algorithm, and the normalized cut is used to partition a graph into two subgraphs. The MAN-C algorithm combines the advantages of many existing algorithms: it is a nonparametric clustering method, with low polynomial complexity and provable properties. The provable properties of MAN-C together with the experimental results demonstrate that the MAN-C algorithm performs well.

Acknowledgements

This work was supported by the Natural Science Foundation of China under Grant No. 60574078, the Natural Science Foundation of Guangdong Province under Grant No. 06300170, and the Natural Science Youth Foundation of South China University of Technology under Grant No. B07-E5050920.


References

1. Jain, A.K., Murty, M.N., Flynn, P.J.: Data Clustering: A Review. ACM Computing Surveys, 31 (1999) 264-323
2. Xu, R., Wunsch II, D.: Survey of Clustering Algorithms. IEEE Trans. on Neural Networks, 16(3) (2005) 645-678
3. Kannan, R., Vempala, S., Vetta, A.: On Clusterings: Good, Bad and Spectral. J. ACM, 51(3) (2004) 497-515
4. Shi, J., Malik, J.: Normalized Cuts and Image Segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8) (2000) 888-905
5. Qiu, H., Hancock, E.R.: Graph Matching and Clustering Using Spectral Partitions. Pattern Recognition, 39 (2006) 22-34
6. Tritchler, D., Fallah, S., Beyene, J.: A Spectral Clustering Method for Microarray Data. Computational Statistics & Data Analysis, 49 (2005) 63-76
7. Van Vaerenbergh, S., Santamaría, I.: A Spectral Clustering Approach to Underdetermined Postnonlinear Blind Source Separation of Sparse Sources. IEEE Trans. on Neural Networks, 17(3) (2006) 811-814
8. Wu, Z., Leahy, R.: An Optimal Graph Theoretic Approach to Data Clustering: Theory and Its Application to Image Segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(11) (1993) 1101-1113
9. Hartuv, E., Shamir, R.: A Clustering Algorithm Based on Graph Connectivity. Information Processing Letters, 76 (2000) 175-181
10. Hagen, L., Kahng, A.B.: New Spectral Methods for Ratio Cut Partitioning and Clustering. IEEE Trans. on Computer-Aided Design, 11(9) (1992) 1074-1085
11. Ding, H., He, X., Zha, H., et al.: A Min-Max Cut Algorithm for Graph Partitioning and Data Clustering. In: Proceedings of IEEE 2001 International Conference on Data Mining, IEEE Computer Society Press, Los Alamitos, CA (2001) 107-114
12. Soundararajan, P., Sarkar, S.: An In-Depth Study of Graph Partitioning Measures for Perceptual Organization. IEEE Trans. on Pattern Analysis and Machine Intelligence, 25(6) (2003) 642-660
13. Carballido-Gamio, J., Belongie, S., Majumdar, S.: Normalized Cuts in 3-D for Spinal MRI Segmentation. IEEE Trans. on Medical Imaging, 23(1) (2004) 36-44
14. Duarte, A., Sánchez, Á., Fernández, F., et al.: Improving Image Segmentation Quality through Effective Region Merging Using a Hierarchical Social Metaheuristic. Pattern Recognition Letters, 27 (2006) 1239-1251
15. He, X., Zha, H., Ding, C.H.Q., et al.: Web Document Clustering Using Hyperlink Structures. Computational Statistics & Data Analysis, 41 (2002) 19-45
16. Li, H., Chen, W., Shen, I-F.: Segmentation of Discrete Vector Fields. IEEE Trans. on Visualization and Computer Graphics, 12(3) (2006) 289-300
17. Ngo, C., Ma, Y., Zhang, H.: Video Summarization and Scene Detection by Graph Modeling. IEEE Trans. on Circuits and Systems for Video Technology, 15(2) (2005) 296-305
18. Yu, Stella X., Shi, J.: Segmentation Given Partial Grouping Constraints. IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(2) (2004) 173-183
19. Nagamochi, H., Ibaraki, T.: Computing Edge-Connectivity in Multigraphs and Capacitated Graphs. SIAM J. Discrete Mathematics, 5 (1992) 54-66
20. Stoer, M., Wagner, F.: A Simple Min-Cut Algorithm. J. ACM, 44(4) (1997) 585-591

A-PARM: Adaptive Division of Sub-cells in the PARM for Efficient Volume Ray Casting

Sukhyun Lim and Byeong-Seok Shin

Inha University, Dept. of Computer Science and Information Engineering, 253 Yonghyun-dong, Nam-gu, Inchon, 402-751, Rep. of Korea
[email protected], [email protected]

Abstract. The PARM is a data structure that ensures interactive frame rates for CPU-based volume ray casting on a PC platform. After determining the candidate cells that contribute to the final image, it partitions each candidate cell into several sub-cells and stores a trilinearly interpolated scalar value and an index of an encoded gradient vector for each sub-cell. Since the information that requires time-consuming computations is already stored in the structure, the rendering time is reduced. However, it requires a huge amount of memory because most precomputed values are loaded into system memory. We solve this by adaptively dividing candidate cells into different numbers of sub-cells: a candidate cell in which the gradient changes sharply is divided into a large number of sub-cells, and vice versa. With this approach, we acquire adequate images while reducing the memory size.

1 Introduction

Volume visualization is a research area that deals with various techniques to extract meaningful and visual information from volume data [1], [2]. Volume datasets can be represented by voxels, and eight adjacent voxels form a cube called a cell. One of the most frequently applied techniques is direct volume rendering, which produces high-quality 3D rendered images directly from volume data without an intermediate representation. Volume ray casting is a well-known direct volume rendering method [1]. In general, it is composed of two steps [3]: after a ray advances through a transparent region (leaping step), the ray integrates colors and opacities as it penetrates an object boundary (color computation step). Although volume ray casting produces high-quality images, the rendering speed is too slow [1], [2], [3].

Several software-based acceleration techniques for direct volume rendering have been proposed to reduce rendering time. Yagel and Kaufman exploited the use of a ray template to accelerate ray traversal, using the spatial coherence of the ray trajectory [4]; however, it is difficult to apply their method to applications involving perspective projection. The shear-warp method introduced by Lacroute and Levoy rearranges the voxels in memory to allow optimized ray traversal (shearing the volume dataset) [5]. Although this method can produce images from reasonable volumes at quasi-interactive frame rates, image quality is sacrificed because bilinear interpolation is used as the convolution kernel. Parker et al. demonstrated a full-featured ray tracer on a workstation with large shared memory [6]; unfortunately, this method requires 128 CPUs for interactive rendering and is specialized for isosurface rendering. Although Knittel presented interleaving of voxel addresses and deep optimizations (using the MMX instruction set) to improve the cache hit ratio [7], it generates only 256×256-pixel images; if a high-resolution image is wanted, the 256×256 image has to be magnified using graphics hardware. Mora et al. proposed a high-performance projection method that exploits a spread-memory layout, called object-order ray casting [8]; however, this also does not support perspective projection. Grimm et al. introduced acceleration structures [9]: they used a gradient cache and a memory-efficient hybrid removal and skipping technique for transparent regions. This method achieves fast rendering on a commodity notebook PC, but it is not applicable to perspective projection.

We call a cell whose eight enclosing voxels are all transparent a transparent cell, and a cell with eight opaque voxels an opaque cell. A candidate cell is a cell that contributes to the final image. After dividing each candidate cell into Nsc³ cells, where Nsc is the number of subdivisions along one axis of a candidate cell, we call each resulting cell a sub-cell; that is, a single candidate cell contains Nsc³ sub-cells. Fig. 1 shows the structure of a candidate cell.

Fig. 1. Structure of a single candidate cell. A candidate cell is composed of several sub-cells. In this example, one candidate cell is composed of 125 sub-cells (that is, Nsc=5).

Recently, Lim and Shin proposed the PARM (Precomputed scAlar and gRadient Map) structure to accelerate the color computation step of volume ray casting [10]. This structure stores the information required to compute color values in CPU-based volume ray casting. At preprocessing time, after determining the candidate cells that contribute to the final image, it divides each candidate cell into several sub-cells; then, it stores a trilinearly interpolated scalar value and an index of an encoded gradient vector for each sub-cell. During the rendering stage, when a ray reaches an arbitrary sample point in a candidate cell after skipping over the transparent region, a color value can be determined without time-consuming computations, because most values needed to compute a color are already stored in the PARM structure. However, because the structure keeps these values in system memory, it becomes a burden on memory as the number of candidate cells increases, and if the sampling rate is reduced in a candidate cell in which the gradient changes sharply, we get aliased or deteriorated images. Whereas the previous PARM structure assigns a fixed number of sub-cells to every candidate cell, our method adaptively divides the candidate cells into different numbers of sub-cells according to the gradient variations. We therefore call it A-PARM (the initial 'A' stands for 'adaptive'). We explain our data structure in detail in Section 2, and experimental results are presented in Section 3. Finally, we conclude our work.

2 A-PARM (Adaptive Precomputed scAlar and gRadient Map)

Our method focuses on reducing the memory size of the PARM structure while preserving adequate image quality. The following subsections describe the method in detail.

2.1 PARM

The obvious way to accelerate the color computation step of CPU-based ray casting is to precompute the required values in a preprocessing step and refer to them in the rendering step. For that purpose, we proposed the PARM structure in our previous work [10]. The first step in generating the PARM is to determine the candidate cells from the entire volume [10]; then we compute the required values. Because precomputing the exact positions of sample points is not feasible (the points can lie at arbitrary positions in a candidate cell), we compute the required values per sub-cell after dividing each candidate cell into Nsc³ sub-cells. The first required value is the trilinearly interpolated scalar value, which is used to determine the opacity value. The next value is the trilinearly interpolated gradient vector. However, since a vector is composed of three components (x-, y- and z-elements), it would require three bytes per sub-cell. We therefore apply the gradient encoding method proposed by Rusinkiewicz and Levoy [11], which maps the gradient vectors onto a grid on each of the six faces of a cube. The conventional method [11] exploits 52×52×6 = 16,224 different gradient vectors; we increase the grid size to 104×104×6 (approximately 2 bytes) to reduce visual artifacts, which leads to a mean error below 0.01 radian. As a result, after computing the trilinearly interpolated gradient vector, we store an index.
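A simplified sketch of the idea, quantizing a gradient direction onto a 104×104 grid on one of the six cube faces and packing it into a single index, is shown below. The exact face layout and grid mapping of [11] differ in details, so this is an assumption-laden illustration rather than the actual encoding.

import numpy as np

GRID = 104  # grid resolution per cube face, matching the 104x104x6 table mentioned above

def encode_gradient(g):
    """Map a non-zero gradient vector to an integer index (< 6*GRID*GRID, i.e. roughly 2 bytes)."""
    g = np.asarray(g, dtype=float)
    axis = int(np.argmax(np.abs(g)))                  # dominant axis selects the cube face
    face = 2 * axis + (0 if g[axis] >= 0.0 else 1)    # faces ordered +x, -x, +y, -y, +z, -z
    rest = [i for i in range(3) if i != axis]
    u, v = g[rest] / abs(g[axis])                     # project onto the face; both lie in [-1, 1]
    ui = min(GRID - 1, int((u + 1.0) * 0.5 * GRID))
    vi = min(GRID - 1, int((v + 1.0) * 0.5 * GRID))
    return face * GRID * GRID + ui * GRID + vi

def decode_gradient(index):
    """Recover a representative unit vector; a table of all decoded vectors serves as the run-time lookup."""
    face, rem = divmod(index, GRID * GRID)
    ui, vi = divmod(rem, GRID)
    axis, sign = face // 2, (1.0 if face % 2 == 0 else -1.0)
    u = (ui + 0.5) / GRID * 2.0 - 1.0
    v = (vi + 0.5) / GRID * 2.0 - 1.0
    vec = np.zeros(3)
    vec[axis] = sign
    rest = [i for i in range(3) if i != axis]
    vec[rest[0]], vec[rest[1]] = u, v
    return vec / np.linalg.norm(vec)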

Fig. 2. With the gradient encoding method, we store an index of the gradient vector instead of the three components of the interpolated vector

The ray-traversal procedure using the PARM is as follows. First, a ray is fired from each pixel on the image plane; after traversing transparent cells using conventional space-leaping methods, it reaches a candidate cell. Second, after selecting the nearest sub-cell, the ray refers to the precomputed scalar value stored in the PARM; because the scalar value at the sample point is already interpolated, no time-consuming trilinear interpolation is required. If the value is within the transparent range, the ray advances to the next sample point; otherwise (that is, if the scalar value is regarded as non-transparent), the ray also refers to the encoded gradient vector index. To decode a representative vector from the index quickly, we use a lookup table [11]. Lastly, a color value is determined from the decoded gradient vector. These steps continue for all rays in the image plane.

2.2 A-PARM

The PARM structure accelerates the color computations of CPU-based volume ray casting. However, because most of the required values are calculated in the preprocessing stage, it requires a long preprocessing time and a huge amount of memory. One of the challenging issues of volume rendering is to acquire high-quality images; with the PARM structure, the simple way to achieve this is to increase the number of sub-cells. Unfortunately, the preprocessing time increases with the number of sub-cells, and more memory is also required. The obvious way to solve this while maintaining image quality is to adaptively divide the candidate cells into different numbers of sub-cells. In our method, therefore, we increase the number of sub-cells in candidate cells where the gradient changes sharply: if the gradient is over a user-defined threshold (τ), we increase the number of sub-cells, and if it is below τ, we reduce it. The steps are as follows. First, we estimate the three components of the gradient vector ∇f(x) = ∇f(x, y, z) for all candidate cells (to determine the candidate cells, we use the original method [10]). We assume that a volume dataset is composed of N voxels, and each voxel v is indexed by v(x, y, z) where x, y, z = 0, …, N−1. Eq. (1) shows our test, using the central difference method [3], the most common approach for gradient estimation in volume graphics [3]. If one of the eight voxels in a candidate cell satisfies Eq. (1), we increase the number of sub-cells.

\nabla f(x, y, z) \approx \frac{1}{2} \begin{pmatrix} f(x+1, y, z) - f(x-1, y, z) \\ f(x, y+1, z) - f(x, y-1, z) \\ f(x, y, z+1) - f(x, y, z-1) \end{pmatrix} > \tau .     (1)
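Under our reading of Eq. (1) (the test passes when any component of the central-difference gradient at any of the eight cell voxels exceeds the threshold in magnitude), the per-cell classification could be sketched as follows; the thresholds 150 and 50 are the values used later in Section 3, and interior voxel positions are assumed so that the central differences are defined.

def cell_subdivision(volume, cell, t_high=150.0, t_low=50.0):
    """Return the per-axis subdivision (4, 3 or 2) for a candidate cell at grid position (x, y, z)."""
    x, y, z = cell
    g_max = 0.0
    for dx in (0, 1):                        # visit the eight voxels of the cell
        for dy in (0, 1):
            for dz in (0, 1):
                i, j, k = x + dx, y + dy, z + dz
                gx = 0.5 * (volume[i + 1][j][k] - volume[i - 1][j][k])
                gy = 0.5 * (volume[i][j + 1][k] - volume[i][j - 1][k])
                gz = 0.5 * (volume[i][j][k + 1] - volume[i][j][k - 1])
                g_max = max(g_max, abs(gx), abs(gy), abs(gz))
    if g_max > t_high:
        return 4                             # steep gradient: 4^3 = 64 sub-cells
    if g_max > t_low:
        return 3                             # moderate gradient: 3^3 = 27 sub-cells
    return 2                                 # gentle gradient: 2^3 = 8 sub-cells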

From an implementation point of view, the PARM is composed of two data structures: an INDEX BUFFER and a DATA BUFFER. The DATA BUFFER stores the precomputed values for each sub-cell. To refer to these values, a data structure storing the indices of the candidate cells is required; this is the INDEX BUFFER. Because the value of Nsc is fixed (once again, Nsc is the number of subdivisions along one axis of a candidate cell), we can locate the expected values in the DATA BUFFER quickly: if a ray lies in a sub-cell Sj (0 ≤ Sj < Nsc) of a candidate cell, we look up the index Ci of the candidate cell in the INDEX BUFFER, and then acquire the precomputed scalar value and the index of the encoded gradient vector from the DATA BUFFER at position Nsc×(Ci−1)+Sj.


Those two buffers are optimal for the original PARM structure. However, since we divide candidate cells into different numbers of sub-cells according to the gradient variations, we cannot use them directly. One simple solution is to generate several INDEX BUFFERs, but this is a burden on system memory because each INDEX BUFFER is as large as the volume dataset; a smarter approach is required. To solve this, we generate a modified INDEX BUFFER, named the M-INDEX BUFFER. Assume that a candidate cell is divided into Nsc1³, Nsc2³ or Nsc3³ sub-cells according to its gradient: we divide a candidate cell into Nsc1³ sub-cells if the gradient is steep, into Nsc2³ sub-cells if it is moderate, and otherwise (that is, if it is gentle) into Nsc3³ sub-cells. First, starting from the previous INDEX BUFFER, we rearrange the indices according to the gradient variations using a sorting algorithm. Second, after scanning all candidate cells in the previous INDEX BUFFER, we rearrange the indices in sequence from the Nsc1³ cells to the Nsc3³ cells. With this approach, only one INDEX BUFFER of the original size is required. Of course, it requires additional generation time compared with the conventional method, but this time is marginal compared with the savings in memory, because only O(N) time is necessary. Fig. 3 shows our M-INDEX BUFFER generation method.

Fig. 3. (Upper) When we use the conventional data structure, we require three INDEX BUFFERs; therefore it is not adequate for our method. (Lower) Although we divide candidate cells into three different numbers of sub-cells according to the gradient variations, only one INDEX BUFFER is required when using the M-INDEX BUFFER. The red, blue, and black squares denote candidate cells whose gradients are steep, moderate, and gentle, respectively.

Besides the M-INDEX BUFFER, we store the total numbers of candidate cells associated with Nsc1, Nsc2 and Nsc3, which we call #Nsc1, #Nsc2 and #Nsc3 (for example, in Fig. 3 they are 12, 10, and 12, respectively). During ray traversal, if a ray reaches a candidate cell and its index in the M-INDEX BUFFER is below #Nsc1 (that is, 0 ≤ Ci < #Nsc1), we acquire the expected precomputed values by referring to Eq. (2). When the index is at least #Nsc1 and below #Nsc1+#Nsc2 (that is, #Nsc1 ≤ Ci < #Nsc1+#Nsc2), we compute the index by Eq. (3). Otherwise (that is, #Nsc1+#Nsc2 ≤ Ci < #Nsc1+#Nsc2+#Nsc3), we use Eq. (4). The remaining steps are identical to the previous PARM (see Sect. 2.1).

N_{sc1} \times (C_i - 1) + S_j .     (2)

(N_{sc1} \times \#N_{sc1}) + (N_{sc2} \times (C_i - \#N_{sc1} - 1) + S_j) .     (3)

(N_{sc1} \times \#N_{sc1} + N_{sc2} \times \#N_{sc2}) + (N_{sc3} \times (C_i - (\#N_{sc1} + \#N_{sc2}) - 1) + S_j) .     (4)
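The three-range lookup of Eqs. (2)-(4) translates directly into code. In this sketch n1, n2, n3 stand for Nsc1, Nsc2, Nsc3, c1 and c2 for #Nsc1 and #Nsc2, and the indexing convention (the '-1' offsets) follows the equations exactly as printed above.

def data_buffer_offset(ci, sj, n1, n2, n3, c1, c2):
    """DATA BUFFER offset for candidate-cell index ci and sub-cell index sj."""
    if ci < c1:                                       # steep-gradient cells, Eq. (2)
        return n1 * (ci - 1) + sj
    if ci < c1 + c2:                                  # moderate-gradient cells, Eq. (3)
        return n1 * c1 + n2 * (ci - c1 - 1) + sj
    return n1 * c1 + n2 * c2 + n3 * (ci - (c1 + c2) - 1) + sj    # gentle-gradient cells, Eq. (4)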

3 Experimental Results

All the methods were implemented on a PC equipped with an AMD Athlon64 X2 4200+ CPU, 2 GB main memory, and a GeForce 6600 graphics card; the graphics card is used only to display the final images. The first and second datasets were an engine block and a bonsai, each with a resolution of 512³. The third dataset was an aneurysm of a human brain vessel with a resolution of 768³. Fig. 4 shows the OTFs (Opacity Transfer Functions) for the datasets. We divide candidate cells into three different numbers of sub-cells according to the gradient thresholds (τ): if one of the three components of ∇f(x, y, z) for a candidate cell is over 150, we divide the candidate cell into 64 sub-cells (that is, Nsc1³ = 4³ = 64); when the gradient is from 50 to 149, we partition the cell into 27 sub-cells (Nsc2³ = 3³ = 27); otherwise, we divide it into Nsc3³ = 2³ = 8 sub-cells.

Fig. 4. The OTFs for the engine block, bonsai, and aneurysm datasets, respectively

The rendering times to produce the perspective views are shown in Table 1. Our method is at least 50% faster than the method using the conventional hierarchical min-max octree, and its rendering time is almost the same as that of the previous PARM structure (the difference is below 5%). The hierarchical min-max octree is a widely used data structure for empty-space skipping [1], [7], [10]. Table 2 shows the preprocessing time and required memory for each dataset: compared with the previous PARM, we reduce the preprocessing time and memory storage by about 26%.

Table 1. Rendering improvements for each dataset before/after our method (unit: %). The time for space leaping is included in our method. All results are rendered with a perspective projection.

dataset          engine   bonsai   aneurysm   average
improvements     55       49       46         50

A-PARM: Adaptive Division of Sub-cells in the PARM

511

Table 2. Preprocessing time and required memory for each dataset. The previous PARM divides all candidate cells into 64 sub-cells.

dataset                                             engine      bonsai      aneurysm
the number of candidate cells (voxels)              5,520,128   4,865,792   2,579,232
candidate cell determination time (A) (secs)        7.24        6.01        3.30
generation time of the PARM (B) (secs)              168.02      147.84      76.97
generation time of the A-PARM (C) (secs)            114.09      107.7       58.89
total preprocessing time of the PARM (D=A+B) (secs) 175.26      153.85      80.27
total preprocessing time of the A-PARM (E=A+C) (secs) 121.33    113.71      62.19
efficiency of the preprocessing (D/E) (%)           30.8        26.1        22.5
required memory of the PARM (F) (MB)                1328        1203        764
required memory of the A-PARM (G) (MB)              907         896         602
efficiency of the required memory (F/G) (%)         31.7        25.5        21.2

Fig. 5 shows the image quality. As can be seen, the image quality using our structure is almost the same as that using the previous PARM. We compute the Root Mean Square Error (RMSE) [12] for an accurate comparison: compared with the results of the previous PARM, the mean RMSE is about 2.04, which is marginal against the reduction in preprocessing time and memory storage.

Fig. 5. Comparison of image quality: the previous PARM (left) and A-PARM (right) for each dataset. The RMSE results are 1.15, 2.89, and 2.09. The engine is rendered with a parallel projection, and the bonsai and aneurysm with perspective projections. For the bonsai, we magnify the soil region for an accurate comparison.


4 Conclusion

The most important issue in volume visualization is to produce high-quality images in real time. To achieve interactive frame rates on a PC platform, the PARM structure was proposed in our previous work. Although it reduces the rendering time by precomputing a trilinearly interpolated scalar value and an index of an encoded gradient vector for each sub-cell, the preprocessing time is long, and the memory consumption is large because most precomputed values are loaded into system memory. To solve these problems, we adaptively divide candidate cells into different numbers of sub-cells according to the gradient variations. The experimental results show that our method reduces rendering time and produces high-quality images while reducing the preprocessing time and memory storage.

Acknowledgment

This work was supported by IITA through the IT Leading R&D Support Project.

References

1. Levoy, M.: Display of Surfaces from Volume Data. IEEE Computer Graphics and Applications, Vol. 8, No. 3 (1988) 29-37
2. Kaufman, A.: Volume Visualization. 1st ed., IEEE Computer Society Press (1991)
3. Engel, K., Weiskopf, D., Rezk-Salama, C., Kniss, J., Hadwiger, M.: Real-Time Volume Graphics. AK Peters (2006)
4. Yagel, R., Kaufman, A.: Template-based Volume Viewing. Computer Graphics Forum, Vol. 11 (1992) 153-167
5. Lacroute, P., Levoy, M.: Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation. Proc. SIGGRAPH 1994 (1994) 451-457
6. Parker, S., Shirley, P., Livnat, Y., Hansen, C., Sloan, P.: Interactive Ray Tracing for Isosurface Rendering. Proc. IEEE Visualization 1998 (1998) 233-238
7. Knittel, G.: The UltraVis System. Proc. IEEE Volume Visualization 2000 (2000) 71-79
8. Mora, B., Jessel, J.P., Caubet, R.: A New Object-Order Ray-Casting Algorithm. Proc. IEEE Volume Visualization 2002 (2002) 203-210
9. Grimm, S., Bruckner, S., Kanitsar, A., Gröller, E.: Memory Efficient Acceleration Structures and Techniques for CPU-based Volume Raycasting of Large Data. Proc. IEEE Volume Visualization 2004 (2004) 1-8
10. Lim, S., Shin, B.-S.: PARM: Data Structure for Efficient Volume Ray Casting. Lecture Notes in Computer Science, Vol. 4263 (2006) 296-305
11. Rusinkiewicz, S., Levoy, M.: QSplat: A Multiresolution Point Rendering System for Large Meshes. Proc. SIGGRAPH 2000 (2000) 343-352
12. Kim, K., Wittenbrink, C.M., Pang, A.: Extended Specifications and Test Data Sets for Data Level Comparisons of Direct Volume Rendering Algorithms. IEEE Trans. on Visualization and Computer Graphics, Vol. 7 (2001) 299-317

Inaccuracies of Shape Averaging Method Using Dynamic Time Warping for Time Series Data

Vit Niennattrakul and Chotirat Ann Ratanamahatana

Department of Computer Engineering, Chulalongkorn University, Phayathai Rd., Pathumwan, Bangkok 10330, Thailand
{g49vnn, ann}@cp.eng.chula.ac.th

Abstract. Shape averaging (signal averaging) of time series data is a prevalent subroutine in data mining tasks. The Dynamic Time Warping distance measure (DTW) is known to work exceptionally well with time series data and has long been demonstrated in various data mining tasks involving shape similarity in many domains; therefore, DTW has been used to find the average shape of two time series according to the optimal mapping between them. Several methods have been proposed, some of which require the number of time series being averaged to be a power of two. In this work, we demonstrate that these proposed methods cannot produce the true average of the time series. We conclude with a suggestion of a method to potentially find the shape-based time series average.

Keywords: Time Series, Shape Averaging, Dynamic Time Warping.

1 Introduction

The need to find a template or data representative of a group of time series is prevalent in the subroutines of major data mining tasks [2][6][7][9][10][14][16][19]. These include query refinement in relevance feedback [16], finding cluster centers in the k-means clustering algorithm, and template calculation in speech processing or pattern recognition. Various algorithms have been applied to calculate these data representatives, which we often simply call a data average. A simple averaging technique uses the Euclidean distance metric; however, its one-to-one mapping nature is unable to capture the actual average shape of two time series. In this case, a shape averaging algorithm based on Dynamic Time Warping is much more appropriate [8].

For shape-based time series data, a shape averaging method should be considered. However, most work involving time series averaging appears to avoid using DTW, in spite of its dire need in shape-similarity-based calculation [2][5][6][7][9][10][13][14][19], without providing sufficient reasons other than simplicity. Those who use k-means clustering often use the Euclidean distance metric for the time series average. This is also true in other domains such as speech recognition and pattern recognition [1][6][9][14], which is perhaps a good indicator flagging problems in DTW averaging methods.


Despite the many proposed shape averaging algorithms, most of them provide methods for specific domains [3][11][12], such as evoked potentials in medical domains. In particular, after surveying related publications from the past decade, there appears to be only one general method, proposed by Gupta et al. [8], who introduced shape averaging using Dynamic Time Warping; it has been the basis for all subsequent work involving shape averaging. As shown in Figure 1 (a), the averaging is done in pairs, and the averaged time series at each level are hierarchically combined until the final average is achieved. Alternatively, a sequential hierarchical averaging method has been suggested, as shown in Figure 1 (b). Many subsequent publications inherit this method under the restriction that the number of time series is a power of two. In this paper, we show that the method proposed in [8] does not have the associative property as claimed.


Fig. 1. Two averaging methods: (a) balanced hierarchical averaging and (b) sequential hierarchical averaging

The rest of the paper is organized as follows. Section 2 explains some important background on shape averaging. Section 3 reveals the problems with the current shape averaging method through an extensive set of experiments. Finally, in section 4, we conclude with some discussion of the potential causes of these inaccuracies and suggest a possible treatment of the shape-based time series averaging problem.

2 Background

In this section, we provide brief details of the Dynamic Time Warping (DTW) distance measure, its properties, and time series averaging using DTW.

2.1 Distance Measurement

Distance measures are extensively used to quantify the similarity/dissimilarity between time series. The two best-known measures are the Euclidean distance metric and the DTW distance measure. A distance metric must satisfy four properties: symmetry, self-identity, non-negativity, and the triangular inequality. A distance measure, however, does not need to satisfy all of the properties above; specifically, the triangular inequality does not hold for the DTW distance measure, which is an important key to explaining why shape averaging under Dynamic Time Warping is so hard.
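For reference, the DTW distance discussed here is computed by the standard dynamic-programming recurrence over a cumulative distance matrix; a minimal sketch follows (the squared point-wise distance and the absence of a warping window are common choices, not necessarily the authors' exact ones).

import numpy as np

def dtw_distance(q, c):
    """Cumulative-cost DTW between a sequence q of length n and a sequence c of length m."""
    n, m = len(q), len(c)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (q[i - 1] - c[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])   # best of the three neighbors
    return D[n, m]

# e.g. dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 4])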


2.2 Dynamic Time Warping Distance

DTW [15] is a well-known similarity measure based on shape. It uses a dynamic programming technique to find all possible warping paths and selects the one with the minimum distance between two time series. To calculate the distance, it creates a distance matrix in which each element is the cumulative distance of the minimum of its three surrounding neighbors. Suppose we have two time series, a sequence Q of length n (Q = q1, q2, …, qi, …, qn) and a sequence C of length m (C = c1, c2, …, cj, …, cm). First, we create an n-by-m matrix where every (i, j) element is the cumulative distance of the distance at (i, j) and the minimum of its three neighboring elements.

where λ > 50. In the case of vehicle speed (60 km/h),

\lim_{\lambda \to \infty} \frac{C_{overall}^{IFH}}{C_{overall}^{FMIPv6}} = \lim_{\lambda \to \infty} \frac{C_{signal}^{IFH} + C_{data}^{IFH}}{C_{signal}^{FMIPv6} + C_{data}^{FMIPv6}} \approx 0.7304,     (10)

where λ > 50. Fig. 6 describes the variation of the cost ratio against FMIPv6 when the radius of a cell is 1 km. The two numerical expressions (9) and (10) show only a small difference in the cost ratio between pedestrian speed and vehicle speed. This reflects the fact that the IEEE 802.16e network is designed especially to support the mobility of the MN, so the probability of the predictive mode of FMIPv6 is not affected by the speed of the MN. The mean of the cost ratios (9) and (10) is about 0.73, and hence we claim that our scheme (IFH) improves the performance of FMIPv6 by 27%.

5 Conclusions and Discussion

It is obvious that we need to solve the technical problems concerning the delays of MIPv6 and FMIPv6 prior to deploying those two techniques over the IEEE 802.16e network. Services over the IEEE 802.16e network require the MN to support seamless real-time connectivity, and using only FMIPv6 is not enough for seamless connectivity. We propose Improved Fast Handover (IFH) to reduce the overall latency of the handover and to unburden the PAR earlier. We showed that IFH achieves improved performance in comparison with FMIPv6: compared to the standard FMIPv6, IFH obtained a 27% improvement. For a more realistic evaluation of our scheme, we plan to investigate the effect of IFH with WiBro equipment through various applications.

Acknowledgement This work was supported by the Korea Research Foundation Grant funded by the Korean Government(MOEHRD) (KRF-2006-005-J03802).


Advanced Bounded Shortest Multicast Algorithm for Delay Constrained Minimum Cost

Moonseong Kim1, Gunu Jho2, and Hyunseung Choo1

1 School of Information and Communication Engineering, Sungkyunkwan University, Korea
{moonseong, choo}@ece.skku.ac.kr
2 Telecommunication Network Business, Samsung Electronics, Korea
[email protected]

Abstract. The Bounded Shortest Multicast Algorithm (BSMA) is one of the most well-known delay-constrained minimum-cost multicast routing algorithms. Although the algorithm shows excellent performance in terms of generated tree cost, it suffers from high time complexity. For this reason, there is much literature relating to the BSMA. In this paper, the BSMA is analyzed, its fallacies and shortcomings are corrected, and an improved scheme is proposed without changing the main idea of the BSMA.

Keywords: Multicast Routing Algorithm, Delay-Bounded Minimum Steiner Tree (DBMST) Problem, and Bounded Shortest Multicast Algorithm (BSMA).

1 Introduction

Depending on the optimization goals, which include cost, delay, bandwidth, delay-variation, reliability, and so on, the multicast routing problem exists at varying levels of complexity. The Delay-Bounded Minimum Steiner Tree (DBMST) problem deals with the minimum-cost multicast tree satisfying the delay bounds from source to destinations. The Bounded Shortest Multicast Algorithm (BSMA) is a routing algorithm which solves the DBMST problem for networks with asymmetric link characteristics [1,2]. The BSMA starts by obtaining a minimum-delay tree, calculated using Dijkstra's shortest path algorithm. It then iteratively improves the cost by performing delay-bounded path switching. The evaluation performed by Salama et al. [3] demonstrates that the BSMA is one of the most efficient algorithms for the DBMST problem in terms of the generated tree cost. However, the high time complexity is a major drawback of BSMA, because a k-shortest path algorithm is used iteratively for path switching. There are several approaches to improving the time complexity of BSMA [4,5]; however, none of them can match the BSMA in terms of cost.




The subsequent sections of this paper are organized as follows. In Section 2, the BSMA is described. In Section 3, the problems with the BSMA are described, and it is shown that the BSMA can perform inefficient path switching in terms of delays from source to destinations without reducing the tree cost. Furthermore, a new algorithm is proposed to substitute for the k-shortest path algorithm, considering the properties of the paths used for path switching. Finally, this paper is concluded in Section 4.

2 Bounded Shortest Multicast Algorithm

BSMA constructs a DBMST by performing the following steps:

1) Initial step: Construct an initial tree with minimum delays from the source to all destinations.
2) Improvement step: Iteratively minimize the cost of the tree while always satisfying the delay bounds.

The initial tree is a minimum-delay tree, which is constructed using Dijkstra's shortest path algorithm. If the initial tree cannot satisfy the given delay bounds, some negotiation is required to relax the delay bounds of the DDF (Destination Delay-bound Function); otherwise, tree construction cannot succeed in satisfying the DDF. BSMA's improvement step iteratively transforms the tree topology to decrease its cost monotonically, while satisfying the delay bounds. The transformation performed by BSMA at each iteration of the improvement step consists of a delay-bounded path switching. The path switching replaces a path in tree Tj by a new path with smaller cost, resulting in a new tree topology Tj+1. It involves the following:

1) Choosing the path to be taken out of Tj and obtaining two disjoint subtrees Tj1 and Tj2;
2) Finding a new path to connect Tj1 and Tj2, resulting in the new tree topology Tj+1 with smaller cost, while the delay bounds are satisfied.

A candidate path in Tj for path switching is called a superedge. Removing a superedge from a multicast tree corresponds to removing all of the tree edges and internal nodes in the superedge. From the definition of a superedge [1,2], a destination node or a source node cannot be an internal node of a superedge. This prevents the removal of destination nodes or the source node from the tree as a result of a path switching. At the beginning of the improvement step, BSMA sets all superedges unmarked and selects the superedge ph with the highest cost among all unmarked superedges. Removing ph from Tj breaks Tj into two disjoint subtrees Tj1 and Tj2, where Tj = ph ∪ Tj1 ∪ Tj2. A delay-bounded minimum-cost path ps between Tj1 and Tj2 is used to reconnect Tj1 and Tj2 to obtain the new tree topology Tj+1, where Tj+1 = ps ∪ Tj1 ∪ Tj2. The cost of ps is not higher than that of ph.


The search for ps starts with the minimum-cost path between the two trees. If the minimum-cost path results in a violation of the delay bounds, BSMA uses an incremental k-shortest path algorithm [6] to find ps. The k-shortest path problem consists of finding the kth shortest simple path connecting a given source-destination pair in a graph. The k-shortest path in BSMA is the k-minimum-cost path between two trees, and finding it is equivalent to finding the k-shortest path between two nodes. Because BSMA uses a k-shortest path algorithm for the path switching, its high time complexity is the major drawback. For this reason, improved algorithms are proposed in [4,5]. While these reduce the execution time, they also incur a performance loss in terms of tree cost.

3 Difficulties in BSMA

3.1 Undirected Graph Model for BSMA

It is not mentioned clearly in [1] whether the network model for BSMA is a directed or an undirected graph. Although the figures in [1] imply that it is undirected, the later version describing the BSMA [2] and other literature [3,4,5] related to BSMA simulations state that the network is modeled as a directed graph. We argue, however, that it should be an undirected graph.

Fig. 1. A case that can happen at the delay-bounded path switching step

The case shown in Fig. 1 can happen during the delay-bounded path switching of BSMA, and it shows that there could be a problem. Fig. 1(a) shows a tree Tj before the path switching, whose highest-cost superedge ph is a path from s to r1, as shown. There are two disjoint subtrees Tj1 and Tj2 in Fig. 1(b), which are obtained by removing the highest-cost superedge ph from Tj. At this step of the path switching, a delay-bounded shortest path ps is searched for. By reconnecting with ps (Fig. 1(c)), Tj+1 is obtained. As shown in Fig. 1(c), Tj+1 is a wrong tree, because the source s cannot send anything to destinations d1 and d2 using Tj+1. To convert Tj+1 in Fig. 1(c) to Tc in Fig. 1(d), which is what the algorithm wants, it must be guaranteed that both the path-delays and path-costs of the paths from r1 to r2 and from r2 to r3 are the same as those of the paths from r2 to r1 and from r3 to r2, respectively. If the algorithm must guarantee this, it becomes an overhead to check all the links in a subtree without the source s (that is, Tj1 or Tj2) at every step it performs the


path switching. At the same time, it can severely reduce the number of possible path switchings, by as many as there are asymmetric links in the tree. Of course, there is no routine to handle this case in BSMA. To avoid this and to preserve the main idea of the BSMA, the network model should be assumed to be an undirected graph. From now on, we use undirected links, so that (u, v) = (v, u) ∈ E with the same link-delay and link-cost values.

3.2 Meaning of ‘Unmark’ the Superedge

The issue of this subsection is about the superedge. There are five superedges in Fig. 1(a): p(s, r1), p(r1, d1), p(r1, r2), p(r2, d2), and p(r2, d3). After the path switching, there are different superedges in Fig. 1(d): p(s, r3), p(r3, d3), p(r3, r2), p(r2, d2), and p(r2, d1). One can see that the superedges change as the tree changes. The simple paths in Fig. 2(a), which redraws Fig. 1(d), are all superedges. The tree in Fig. 2(b) is the result of another path switching, and the tree in Fig. 2(c) shows the same tree where the simple paths are all superedges.

[Figure 2: three tree diagrams (a), (b), (c); legend: simple path, source node, destination node, relay node.]

Fig. 2. Changes of superedges as the tree changes

BSMA marks the highest-cost unmarked superedge ph when the superedge is being considered for path switching. If ph is switched to a delay-bounded minimum-cost path ps, BSMA unmarks all marked superedges [1,2]. If the flag value of a superedge is marked, it means there is no path to substitute for the superedge that would reduce the cost of the current tree Tj. When ph is switched to ps, the tree is changed from Tj to Tj+1. Of course, BSMA must then recalculate the superedges in the new tree Tj+1 and initialize them as unmarked.

3.3 The Delay-Bounded Minimum-Cost Path ps and the Function to Get the ps Between Tj1 and Tj2

According to [1,2], one of two cases must happen when the delay-bounded minimum-cost path ps is obtained: 1. path ps is the same as the path ph ; or 2. path ps is different from the path ph . If the first case occurs, the ph has been examined without improvement of the tree cost. If the second case occurs, the tree Tj+1 would be more cost-effective


tree than Tj. But it is possible in the second case to generate a tree Tj+1 whose cost is the same as that of Tj and whose end-to-end delays between the source and each destination are worse than those of Tj. The former, cost-effective tree Tj+1 is what the algorithm wants; the latter tree is not. This is because a path with the same cost as that of ph could be found as ps.

[Figure 3: two trees in which the values on the simple paths are path-delays; (a) T0: minimum-delay tree; (b) T1, the tree after ph is switched to a ps with path-delay 8.]

Fig. 3. Ineffective path switching in terms of end-to-end delay without any improvement of tree cost

Fig. 3 shows an example of this. The values on the simple paths in Fig. 3 are path-delays. The tree T0 in Fig. 3(a) is a minimum-delay tree generated using Dijkstra's algorithm, so the end-to-end delays between the source s and each destination d1, d2, and d3 are certainly the minimum values. The tree T1 in Fig. 3(b) is the tree after the path switching. Here ps is a simple path whose path-delay is 8 and whose path-cost is the same as that of ph. As a result of the path switching, BSMA generates the ineffective tree T1 in terms of end-to-end delay without any improvement of tree cost. If BSMA considers only whether ps is equal to ph or not, this ineffective path switching is always possible as long as there could be a path whose cost is the same as that of ph while satisfying the delay bounds. So BSMA must select, as ps, a path with smaller cost than that of ph. (We must note here that the ineffective path switching does not happen in BSMA based on the greedy heuristic, since it performs the path switching only when the gain is larger than zero.) Additionally, we need to think about the procedure for determining whether a path is delay-bounded or not. Whenever BSMA looks for ps, it has to determine whether a candidate path is delay-bounded or not. That is to say, BSMA has to perform one of the following:

1. construction of a tree from Tj1, Tj2 and a candidate path for ps, for every candidate, until finding ps; or
2. pre-calculation of the end-to-end delays between the source and each destination for all cases in which ps could connect Tj1 with Tj2.

The total cost of a tree can be calculated without considering how the nodes in the tree are connected by the links. Because it is the sum of the link-costs in the tree, we only need information about which links are in the tree and how

[Figure 4: (a) the two disjoint subtrees Tj1 = {s, d4} (edge s–d4 with value 1) and Tj2 = {r2, d1, d2, d3} (edges r2–d1 = 2, r2–d2 = 3, r2–d3 = 4); (b) a candidate path for ps with value w; (c) the pre-calculation of delays from the source, reproduced in the table below.]

(c) Pre-calculation: delay from the source to each destination for every pair of connection points

Connection points    d1      d2      d3      d4
s – r2               2+w     3+w     4+w     1
s – d1               w       5+w     6+w     1
s – d2               5+w     w       7+w     1
s – d3               6+w     7+w     w       1
d4 – r2              3+w     4+w     5+w     1
d4 – d1              1+w     6+w     7+w     1
d4 – d2              6+w     1+w     8+w     1
d4 – d3              7+w     8+w     1+w     1

Fig. 4. Calculation of end-to-end delays in tree

much their costs are. But the end-to-end delay between two tree nodes, such as the source and a destination, can only be calculated after considering the tree structure. Fig. 4(a) shows the subtrees during BSMA's path switching and (b) is a candidate path for ps. If the values on the simple paths in Fig. 4(a) and (b) are the path-costs, the total cost of a tree Tj+1 (= Tj1 ∪ Tj2 ∪ ps) would be determined as soon as the path-cost w of the candidate is determined (that is, 1 + 2 + 3 + 4 + w), without considering how the candidate connects Tj1 with Tj2. But if the values indicate path-delays, it is impossible to determine the end-to-end delays between the source s and each destination d1, d2, or d3 without considering the tree structures of Tj1 and Tj2. Obviously there are two ways to determine whether the candidate is delay-bounded or not: one is to construct the tree by connecting Tj1 and Tj2 with the candidate; the other is to perform a pre-calculation such as in Fig. 4(c). Of course, both must consider the structures of the two subtrees.

3.4 Inefficient Use of k-Shortest Path Algorithm

As the literature [4,5] mentions, the k-shortest path algorithm [6] used for finding ps is the major drawback of BSMA. The time complexity of the k-shortest path algorithm is O(kn^3). The k value can be set to a fixed value to reduce the execution time of BSMA; however, this also reduces the performance of BSMA in terms of generated tree cost. In this subsection we propose another algorithm to substitute for the k-shortest path algorithm. The proposed algorithm finds candidate paths for ps within some path-cost range and does not deteriorate the performance of BSMA. According to what we described in subsection 3.3, BSMA does not need paths whose costs are equal to or larger than that of ph while finding ps. And obviously we cannot find any path with smaller cost than that of the minimum-cost path calculated by Dijkstra's algorithm. Consequently, the candidate paths for ps are the paths whose cost is equal to or larger than that of the minimum-cost path and smaller than that of ph. The following is the pseudo code of the proposed algorithm.


Description of internal variables and functions:
p[0..(|V| − 1)], q[0..(|V| − 1)]: an array containing the node sequence for a path
index_p: the index for p
cost_p: the path-cost of p
P: the set of searched paths
Q: queue containing paths on searching
PUSH(Q, p): insert p into Q
POP(Q): take a path out of Q

PROCEDURE PathSearch_SameCostRange(Tj1, Tj2, minCost, maxCost, G)
  P ← ∅, Q ← ∅;
  for each node i ∈ Tj1 {
    p[0] ← i, index_p ← 0, cost_p ← 0;
    PUSH(Q, p);
  }
  while Q ≠ ∅ {
    p ← POP(Q);
    for each neighbor node n of p[index_p] {
      if (n is in the array p) then continue;        \\ because we are looking for simple paths
      if (n ∈ Tj1) then continue;                    \\ because we are looking for paths between Tj1 and Tj2
      if (cost_p + link-cost of (p[index_p], n) ≥ maxCost) then continue;
      q ← p, cost_q ← cost_p, index_q ← index_p;     \\ copy p to q
      q[index_q + 1] ← n;
      cost_q ← cost_q + link-cost of (q[index_q], n);
      index_q ← index_q + 1;
      if (n ∈ Tj2 AND cost_q ≥ minCost) then P ← P ∪ {q};
      if (n ∉ Tj2) then PUSH(Q, q);
    }
  }
  return P;
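For illustration, a direct Python transcription of the procedure above is sketched below; the adjacency-dictionary graph representation (node → {neighbor: link-cost}) is an assumption made for this sketch, not part of the original algorithm description.

from collections import deque

def path_search_same_cost_range(tj1_nodes, tj2_nodes, min_cost, max_cost, adj):
    """Enumerate simple paths from Tj1 to Tj2 whose cost lies in [min_cost, max_cost)."""
    found = []                                    # the set P of searched paths
    queue = deque(([i], 0) for i in tj1_nodes)    # paths under construction (the queue Q)
    while queue:
        path, cost = queue.popleft()
        for n, w in adj[path[-1]].items():
            if n in path:                         # simple paths only
                continue
            if n in tj1_nodes:                    # paths must leave Tj1 and reach Tj2
                continue
            if cost + w >= max_cost:              # prune paths that are already too costly
                continue
            new_path, new_cost = path + [n], cost + w
            if n in tj2_nodes and new_cost >= min_cost:
                found.append((new_path, new_cost))
            if n not in tj2_nodes:
                queue.append((new_path, new_cost))
    return found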

Although a number of candidates for ps are searched, BSMA uses only the one satisfying the delay bounds with the smallest cost, that is, ps. So BSMA does not need to find all the candidates at once. Therefore the arguments minCost and maxCost are not the minimum path-cost between Tj1 and Tj2 and the path-cost of ph, respectively. Instead, the half-closed interval [minimum path-cost between Tj1 and Tj2, path-cost of ph) is divided into several sub-intervals that supply the minCost and maxCost values. BSMA iteratively increases minCost and maxCost until it either finds ps or recognizes that there is no path to substitute for ph. When the network size is large with many links, the memory required to calculate all the candidates is heavy, in addition to the high time complexity. So this approach is quite practical and does not impose any limitation on BSMA's performance. The difference between minCost and maxCost can be adjusted according to


the characteristics of the modeled link-costs (e.g., if the link-costs are modeled as integer values, minCost and maxCost can be the integer values x and x + 1, where x is the starting point of a divided interval and 1 reflects this characteristic). In the next section, this characteristic is denoted sys.
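The iteration over cost sub-intervals described above could be driven as in the following sketch, which reuses the path_search_same_cost_range sketch given earlier; the names step (playing the role of the granularity sys) and is_delay_bounded (a caller-supplied test) are assumptions for illustration.

def find_ps(tj1_nodes, tj2_nodes, min_between, ph_cost, step, adj, is_delay_bounded):
    """Sweep [min_between, ph_cost) in sub-intervals of width `step` and return the
    cheapest delay-bounded candidate found, or None if ph cannot be substituted."""
    lo = min_between
    while lo < ph_cost:
        hi = min(lo + step, ph_cost)
        candidates = path_search_same_cost_range(tj1_nodes, tj2_nodes, lo, hi, adj)
        for path, cost in sorted(candidates, key=lambda pc: pc[1]):   # cheapest first
            if is_delay_bounded(path):
                return path, cost
        lo = hi                            # no usable path in this interval; try the next one
    return None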

4 Conclusion

BSMA is one of the most well-known delay-constrained minimum-cost multicast routing algorithms. Although its performance is excellent in terms of generated tree cost, its time complexity is very high. For this reason, there is much literature related to BSMA. We have shown that BSMA has fallacies and ambiguities, and we have modified it accordingly. We started by describing the BSMA [2]. Then we showed that the BSMA has fallacies, and that the BSMA can perform inefficient path switching in terms of delays from source to destinations without reducing the tree cost. Hence, we proposed an algorithm to substitute for the k-shortest path algorithm, considering the properties of the paths which are used for the path switching.

Acknowledgment This research was supported by the MIC(Ministry of Information and Communication), Korea, under the ITRC(Information Technology Research Center) support program supervised by the IITA(Institute of Information Technology Assessment), IITA-2006-(C1090-0603-0046).

References
1. Zhu, Q., Parsa, M., Garcia-Luna-Aceves, J.J.: A Source-Based Algorithm for Delay-Constrained Minimal-Cost Multicasting. Proceedings of INFOCOM, IEEE (1995) 377-385
2. Parsa, M., Zhu, Q., Garcia-Luna-Aceves, J.J.: An Iterative Algorithm for Delay-Constrained Minimum-Cost Multicasting. IEEE/ACM Transactions on Networking, Vol. 6, Issue 4, IEEE/ACM (1998) 461-474
3. Salama, H.F., Reeves, D.S., Viniotis, Y.: Evaluation of Multicast Routing Algorithms for Real-Time Communication on High-Speed Networks. Journal of Selected Areas in Communications, Vol. 15, No. 3, IEEE (1997) 332-345
4. Gang, F., Kia, M., Pissinoul, N.: Efficient Implementations of Bounded Shortest Multicast Algorithm. Proceedings of ICCCN, IEEE (2002) 312-317
5. Gang, F.: An Efficient Delay Sensitive Multicast Routing Algorithm. Proceedings of the International Conference on Communications in Computing, CSREA (2004) 340-348
6. Lawler, E.: Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston (1976)

Efficient Deadlock Detection in Parallel Computer Systems with Wormhole Routing

Soojung Lee

GyeongIn National University of Education, 6-8 Seoksu-dong, Anyang, Korea 430-739
[email protected]

Abstract. Wormhole routing has been popular in massively parallel computing systems due to its low packet latency. However, it is subject to deadlock, where packets wait for resources in a cyclic form indefinitely. Current deadlock detection techniques are basically dependent on the time-out strategy, thus yielding a non-negligible number of false deadlock detections, especially in heavy network loads or with long packets. Moreover, several packets in a deadlock may be marked as deadlocked, which would saturate the resources allocated for recovery. This paper proposes a simple but more accurate deadlock detection scheme which is less dependent on the time-out value. The proposed scheme presumes deadlock only when a cyclic dependency among blocked packets exists. Consequently, the suggested scheme considerably reduces the probability of detecting false deadlocks over previous schemes, thus enabling more efficient deadlock recovery and higher network throughput. Simulation results are provided to demonstrate the efficiency of the proposed scheme.

1 Introduction

Wormhole routing has been quite popular in parallel computing systems with interconnection networks, because it can significantly reduce packet latency and obviate the requirement for packet buffers [1]. In wormhole routing, a packet is split into several flits for transmission. A header flit leads the route and the remaining flits follow in a pipelined fashion. However, wormhole routing is susceptible to deadlock, where a set of packets may become blocked forever. This situation occurs when each packet in the set requests a channel resource held by another packet in the set in a circular way. Deadlock avoidance has been the traditional approach to handling the deadlock problem [2]. In this approach, routing is restricted in such a way that no cyclic dependency exists between channels. For example, the turn model [3,6] prohibits turns that may form a cycle. However, such a design of the routing algorithm results in low adaptivity and increased latency. A routing algorithm is said to be adaptive if a routing path is selected based on dynamic network conditions. A way to achieve higher throughput while avoiding deadlock is to use virtual channels. A number of virtual channels share a physical channel, thereby composing virtual networks and facilitating adaptive routing algorithms. In [4], virtual


channels are divided into two classes; one for dimension-order routing with no cyclic dependency and the other for fully adaptive minimal routing. Although this scheme can provide more flexibility, it is only partially adaptive. The frequency of deadlock occurrence is reported to be very low with a fully adaptive routing algorithm [11]. Hence, it is wasteful to limit routing adaptivity for rarely occurring deadlocks. This motivated a new approach to handling deadlocks, deadlock detection and recovery. The criteria for determining deadlock is basically time-out. That is, a packet is presumed as deadlocked if it has been waiting for longer than a given threshold [7,10] or if all of its requested channels are inactive for longer than the threshold [9]. Although these schemes can detect all deadlocks, they may misinterpret simply-congested packets as deadlocked. A more sophisticated method to determine deadlock was proposed in [8], which, to our knowledge, performs best in detecting deadlocks accurately. It notices a sequence of blocked packets as a tree whose root is a packet that is advancing. When the root becomes blocked later, only the packet blocked due to the root is eligible to recover. However, the accuracy of the mechanism in [8] relies on the dependency configuration of blocked packets as well as the threshold value. In general, deadlock is recovered by ejecting deadlocked packets from the network [8] or by forwarding them through a dedicated deadlock-free path [10]. Deadlock frequency determines the performance of deadlock detection and recovery schemes. In heavily loaded networks, those packets presumed as deadlocked will saturate the recovery resources, thus degrading performance considerably. Therefore, it is required that only real-deadlocked packets use the resources, as their occurrence frequency is low [11]. However, previous schemes [7,8,9] cannot distinguish between real deadlocked and blocked packets waiting longer than the given threshold. Also, they force all the packets in deadlock to recover, although it is sufficient to choose only one packet to break the deadlock. The performance of a fully adaptive routing algorithm relies on the effectiveness of the deadlock detection mechanism associated with it. We propose simple but effective deadlock detection mechanisms which employ a special control packet named probe to detect deadlock. A blocked packet initiates a probe when all of its requested channels are inactive for the threshold and propagates it along the path of inactive channels. The presence of deadlock is presumed, if a cyclic dependency among blocked packets is suspected through probe propagation, thereby reducing the number of packets detected as deadlocked considerably over previous schemes. The performance of our schemes is simulated and compared with that of a previous scheme [8], known to be most efficient in reducing the number of false deadlock detections.

2 The Proposed Mechanism

We first describe our scheme for mesh networks. To depict resource dependencies at a point of time, the channel wait-for graph (CWFG) can be used, where vertices represent the resources (either virtual channels or physical channels for networks with no virtual channel) and edges represent either ‘wait-for’ or ‘owned-after’ relations [5,11,12]. A wait-for edge (ci , cj ) represents that there


exists a message occupying channel ci and waiting for channel cj . An owned-after edge (ci , cj ) implies the temporal order in which the channels were occupied, i.e., cj is owned after ci . In a network with virtual channels, a packet header may proceed if any of the virtual channels associated with its requested channel is available. Hence, in such network, there are multiple wait-for edges outgoing from a vertex, while there always exists only one owned-after edge, as data flits in wormhole networks simply follow their previous flits in a pipelined fashion. For the description of the proposed mechanism, we introduce the following notation. Notation 1. Assume that blocked packet m holds c and an edge (c, c ) exists in the CWFG. We refer to c as a predecessor of c with respect to m and denote the set of predecessors of c with respect to m as pred(c)|m . Also, let dim(c) and dir(c) denote the dimension and direction of a physical or virtual channel c, respectively. As a cycle is a necessary condition to form a deadlock, our scheme is motivated by a simple observation that a cycle involves at least four blocked packets in a minimal routing. From this observation, one may think of an idea that a cycle is detected by counting the number of blocked packets in sequence. That is, if the number counts up to at least four, one concludes that a potential deadlock exists. This idea obviously reduces the number of false deadlock detections over those schemes which simply measure the channel inactivity time for time-out for deadlock detection; these schemes would yield deadlock detections as much as the number of blocked packets. Obviously, our idea may detect deadlock falsely. For instance, consider a sequence of blocked packets residing within one dimension only, without turning to other dimensions; note that such sequence of blocked packets cannot form a cycle in meshes. Our idea would declare deadlock in such case, although there is none. Therefore, in order to further reduce the number of false deadlock detections, we take a different view of identifying a cycle. Namely, we focus on the number of corners, rather than on the number of blocked packets in sequence. If the number of corners formed by a sequence of blocked packets counts up to four or more, the presence of a deadlock is presumed. The above criteria for detecting deadlock based on the number of turns are likely to detect deadlock falsely. However, the frequency of deadlock occurrence is reported to be very low with a fully adaptive routing algorithm [11]. Hence, it is believed that a complex cycle would rarely occur except in a heavy network condition. Moreover, it is more important to quickly dissipate congestion by resolving simple cycles before they develop into complex ones. To implement the above idea, we employ a special control packet named probe to traverse along inactive channels for deadlock detection. Basically a probe is initiated upon time-out. However, in order not to initiate probes repetitively along the same channel, a bit, named PIB (Probe Initiation Bit), is allocated for each physical channel to indicate that a probe is initiated and transmitted through the channel. The bit is reset when the physical channel becomes active. Specifically, probes are initiated and propagated according to the following rules.


Rule 1. A router initiates a probe if (i) there is a blocked packet, (ii) all the channels requested by the blocked packet are inactive for threshold TO due to other blocked packets holding the channels, and (iii) the PIB of any one of the channels requested by the blocked packet is zero. Let c be a channel with zero PIB, requested by the blocked packet. Also, let m be one of the packets holding c. Upon initiation, the router transmits probe(m) through c and sets the PIB of c to one.

Rule 2. When a packet is delivered along a channel with PIB of one, set the PIB to zero.

Rule 3. Let c be the input channel through which probe(m) is received. Let e be (c, c'), where c' ∈ pred(c)|m. If e is an owned-after edge, simply forward probe(m) through c'. Otherwise, if e is a wait-for edge, check if all the channels requested by m are inactive for threshold TOF due to other blocked packets holding the channels. If yes, transmit probe(m') through c', where m' is one of the packets holding c'. If no, discard the received probe(m).

By Rule 3, a probe follows the path through which a blocked packet is routed until the header of the packet is met. At that moment, all the channels requested by the header are checked for their inactivity time. When the time exceeds the TOF threshold for each of the channels, the probe is forwarded through one of the channels. Unlike TO, one may set TOF to a small value, in order not to delay the probe transmission. Let us call the process of initiation and transmission of a probe probing. The probe carries information on the number of turns made by the blocked packets which hold the channels on the probing path; a packet is said to make a turn if it changes its routing dimension. The number of turns is represented by count. When a router receives a probe, it examines the count carried by the probe. If count is at least four, the router presumes the presence of deadlock. As count is carried by probes, we name this mechanism the COUNTING scheme. Specifically, the mechanism manipulates count as follows.

Rule C1. Upon initiation of a probe for blocked packet m waiting on input channel c, if the probe is to be transmitted along channel c', then (i) if dim(c) = dim(c'), transmit the probe carrying a count of zero along c'; (ii) otherwise transmit the probe carrying a count of one along c'.

Rule C2. Upon receiving a probe through channel c, if the probe is to be transmitted along channel c', then (i) if dim(c) ≠ dim(c'), increase the received count by one; (ii) if count ≥ 4 and (c, c') is a wait-for edge, declare deadlock; otherwise, transmit the probe carrying count along c'.

In Rules C1 and C2, whether to send the probe or not, and which channel to send the probe through, are all determined by Rules 1 and 3. Rule C2 allows deadlock


declaration when the probe encounters a packet header. This is to recover the potential deadlock by selecting that packet as a victim.

Note that the COUNTING scheme does not consider the directions of turns. It may misinterpret a non-cyclic sequence of blocked packets involving turns of the same direction as deadlock. In order to better distinguish deadlock, we suggest a slight modification to the COUNTING scheme that reflects the direction of turns. A bit is used for each direction in each dimension. For example, four bits are used for 2D networks: two bits for the positive and negative directions in dimension zero and another two bits in dimension one. In general, 2n bits are used for nD networks. These bits are carried by probes, as count is carried in the COUNTING scheme. The basic idea of the modified scheme is to set the bits corresponding to the turn direction and to declare deadlock when at least four bits corresponding to any two dimensions are set. We call this modified scheme the BITSET scheme. The basic operations for probe and PIB management are the same as described in Rules 1 to 3. Hence, we present only the bit operations below.

Notation 2. Probes in the BITSET scheme carry 2n TBs (Turn Bits) in nD networks. Specifically, TB(d,+) and TB(d,−) represent the bits for the positive and negative directions in dimension d, respectively.

Rule B1. Upon initiation of a probe for blocked packet m waiting on input channel c, if the probe is to be transmitted along channel c', then (i) if dim(c) = dim(c'), transmit the probe along c' carrying zero TBs; (ii) otherwise transmit the probe along c' carrying TBs with TB(dim(c), dir(c)) and TB(dim(c'), dir(c')) set, where dir(c) = + if packet m was sent along the positive direction of c and dir(c) = − otherwise; dir(c') is set similarly.

Rule B2. Upon receiving a probe through channel c, if the probe is to be transmitted along channel c', then (i) if dim(c) ≠ dim(c'), set TB(dim(c), dir(c)) and TB(dim(c'), dir(c')); (ii) if TB(d1,+), TB(d1,−), TB(d2,+), and TB(d2,−) are set for any two dimensions d1 and d2, and (c, c') is a wait-for edge, declare deadlock; otherwise, transmit the probe carrying the TBs along c'.

In k-ary n-cube networks, deadlock can be formed involving wraparound channels. This type of deadlock may not be detected by the rules above if it does not include a sufficient number of turns. We take a simple approach to detecting such deadlock by regarding wraparound channel usage along the same dimension as a 180-degree turn, thus increasing count by two for the COUNTING scheme. For the BITSET scheme, it is treated as if the packet were making a 180-degree turn through one higher dimension, thus setting the TBs corresponding to those two dimensions. The detailed description is omitted due to the space constraint.
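As a small illustration of the bookkeeping in Rule C2 of the COUNTING scheme, the sketch below updates the turn counter carried by a probe and decides whether to presume deadlock; the function and argument names are hypothetical.

def counting_on_probe(count, dim_in, dim_out, edge_is_wait_for):
    """Rule C2 of the COUNTING scheme: update `count` when the probe turns to another
    dimension, and declare deadlock once at least four corners have been seen."""
    if dim_in != dim_out:                 # the blocked packet turns at this router
        count += 1
    if count >= 4 and edge_is_wait_for:   # enough corners to suspect a cycle
        return count, True                # presume deadlock (select this packet as victim)
    return count, False                   # otherwise forward the probe carrying `count`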

3 Performance

This section presents simulation results for the proposed schemes and for the scheme in [8], which is considered, to our knowledge, the most efficient in reducing the

[Figure 1: four plots of the percentage of deadlocked packets versus normalized load rate (f/n/c), comparing the BIT, CNT, and LOP schemes.]

Fig. 1. Percentage of packets detected as deadlocked (a) TO=16 cycles, 16x16 meshes. (b) TO=16 cycles, 8x8x8 meshes. (c) TO=16 cycles, 16x16 tori. (d) TO=128 cycles, 16x16 meshes.

number of false deadlock detections. The simulations are run on 16x16 and 8x8x8 mesh and torus networks. Channels are with three virtual channels of buffer depth of two flits. The routing algorithm is minimal and fully adaptive. Packets are 32 flit-sized and their destinations are assumed uniformly distributed. We assume one clock cycle each for transmission of a flit over a channel, decoding a control flit, and crossing a switch. The statistics have been gathered after executing the program for 50000 clock cycles. The result of the first 10000 cycle is discarded for the initial transient period. Packets are generated exponentially with varying injection rate where the same rate is applied to all nodes. A packet presumed as deadlocked is ejected from the network and re-injected later when any of its requested channel resources is available. We measured the percentage of packets detected as deadlocked by each strategy for varying normalized load rate of flits per node per cycle (f/n/c). Figure 1 shows the results for two T O thresholds of 16 and 128 clock cycles. The results of [8] are indicated with the legend ‘LOP’ and those of COUNTING and BITSET schemes with ‘CNT’ and ‘BIT’, respectively. T OF threshold for forwarding probes is set to two cycles for all experiments. The four figures show similar behaviors approximately. As expected, the percentage increases with the load rate. BIT detects almost no deadlock except for torus networks. Overall, CNT performs better than LOP, its percentage being approximately as much as eight times lower for meshes and eleven times lower for tori. It is shown that for large T O such as 128 cycles, LOP yields less than 0.3 percentage of packets presumed as deadlocked, even at high loads for meshes. The results for CNT for the three network types are almost comparable, although packets would turn more often in 3D networks. Normalized accepted traffic measured in flits per node per cycle is presented in Figure 2 for 2D meshes. All three schemes perform comparably in most cases except for high loads and T O of 16 cycles, at which the throughput of LOP drops drastically. This is because LOP detects too many packets as deadlocked for that network condition, as shown in Figure 1(a). Note that for other network conditions, the difference in the number of deadlocked packets has no significant effect on throughput.


[Figure 2: plots of normalized accepted traffic ((a), (b)) and number of probings per node per cycle ((c)) versus normalized load rate (f/n/c) for the BIT, CNT, and LOP schemes; panel (c) compares BIT and CNT for TO=16 and TO=128.]

Fig. 2. 16x16 meshes (a) Normalized accepted traffic when TO=16 cycles. (b) Normalized accepted traffic when TO=128 cycles. (c) Mean number of probe initiations per node per clock cycle.

Table 1. Mean number of probe transmissions per probing for COUNTING scheme

                                        Normalized Load Rate
Network Configurations           0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9
16x16 meshes and TO=16 cycles    N/A   5.6   6.6   6.2   6.3   6.4   6.6   6.6   6.6
16x16 meshes and TO=128 cycles   N/A   N/A   N/A   N/A   6.3   5.5   6.7   7.3   8.5
8x8x8 meshes and TO=16 cycles    N/A   4.5   3.9   3.8   3.9   4.1   4.2   4.3   4.4
8x8x8 meshes and TO=128 cycles   N/A   N/A   N/A   2.8   3.4   4.0   4.3   4.4   4.5

As the proposed schemes utilize a control packet, we measured its load on the router through the number of probings and the number of probe transmissions per probing. The number of probings initiated by a node per cycle is shown in Figure 2(c) for 2D meshes. Obviously, a node tends to initiate probings more often with higher loads but less frequently with higher thresholds. In particular, there is a significant difference between the results for the two thresholds. It is observed for T O of 128 cycles that there is virtually no probing activity regardless of the network load. For T O of 16 cycles, it is noted for both schemes that a node initiates approximately two probings per 1000 clock cycles at the saturated load and no more than five probings at extremely high loads. The reason for the slight difference between the results of the two schemes at high loads for T O of 16 cycles is due to the fact that CNT detects more deadlocks than BIT as shown in Figure 1(a). That is, it facilitates blocked packets to proceed, since more packets are ejected from the highly loaded network. This reduces the need for probe initiations. BIT and CNT schemes showed similar results for the other network configurations. Table 1 shows the number of probe transmissions for a probing. In general, more probes are transmitted for 2D meshes than for 3D meshes. This is simply because packets have more routing adaptivity and are less blocked in 3D meshes. It is noted that the number of probes tends to increase with the load rate for both networks, especially for T O of 128 cycles. For T O of 16 cycles, blocked packets are dissipated promptly by more frequent probings than for 128-cycle T O, which leads to lower possibility of forwarding probes.

4 Conclusions

This paper proposed enhanced mechanisms for deadlock detection in wormhole-routed direct networks. Unlike previous schemes, the proposed schemes do not rely solely on the threshold value. A control packet propagates to find out whether deadlock is present. As the control packets traverse only inactive channels, they virtually do not disturb normal packet progression. Simulation studies were conducted to compare the performance of the proposed schemes with that of the scheme which, to our knowledge, is the most efficient in reducing the number of false deadlock detections. The simulation results demonstrate that the suggested schemes yield a substantial decrease in the number of deadlock detections in various network conditions. Consequently, our schemes outperform the previous scheme in terms of network throughput, irrespective of the time-out threshold.

References
1. Al-Tawil, K.M., Abd-El-Barr, M., Ashraf, F.: A survey and comparison of wormhole routing techniques in a mesh network. IEEE Network 11(2) (1997) 38-45
2. Park, H., Agrawal, D.P.: A generic design methodology for deadlock-free routing in multicomputer networks. Journal of Parallel and Distributed Computing 61(9) (2001) 1225-1248
3. Chiu, G.M.: The odd-even turn model for adaptive routing. IEEE Trans. Parallel and Distributed Systems 11(7) (2000)
4. Duato, J.: A general theory for deadlock-free adaptive routing using a mixed set of resources. IEEE Trans. Parallel and Distributed Systems 12(12) (2001) 1219-1235
5. Duato, J.: A necessary and sufficient condition for deadlock-free adaptive routing in wormhole networks. IEEE Trans. Parallel and Distributed Systems 6(10) (1995) 1055-1067
6. Glass, C.J., Ni, L.M.: The turn model for adaptive routing. Journal of the ACM 41(5) (1994) 874-902
7. Kim, J., Liu, Z., Chien, A.: Compressionless routing: a framework for adaptive and fault-tolerant routing. IEEE Trans. Parallel and Distributed Systems 8(3) (1997) 229-244
8. Martinez, J.M., Lopez, P., Duato, J.: FC3D: flow control-based distributed deadlock detection mechanism for true fully adaptive routing in wormhole networks. IEEE Trans. Parallel and Distributed Systems 14(8) (2003) 765-779
9. Martinez, J.M., Lopez, P., Duato, J.: A cost-effective approach to deadlock handling in wormhole networks. IEEE Trans. Parallel and Distributed Systems 12(7) (2001) 716-729
10. Pinkston, T.M.: Flexible and efficient routing based on progressive deadlock recovery. IEEE Trans. Computers 48(7) (1999) 649-669
11. Pinkston, T.M., Warnakulasuriya, S.: Characterization of deadlocks in k-ary n-cube networks. IEEE Trans. Parallel and Distributed Systems 10(9) (1999) 904-921
12. Schwiebert, L., Jayasimha, D.N.: A necessary and sufficient condition for deadlock-free wormhole routing. Journal of Parallel and Distributed Computing 32 (1996) 103-117

Type-Based Query Expansion for Sentence Retrieval

Keke Cai, Chun Chen, Jiajun Bu, and Guang Qiu

College of Computer Science, Zhejiang University, Hangzhou, 310027, China
{caikeke, chenc, bjj, qiuguang}@zju.edu.cn

Abstract. In this paper, a novel sentence retrieval model with type-based expansion is proposed. In this retrieval model, sentences expected to be relevant should meet with the requirements both in query terms and query types. To obtain the information about query types, this paper proposes a solution based on classification, which utilizes the potential associations between terms and information types to obtain the optimized classification results. Inspired by the idea that relevant sentences always tend to occur nearby, this paper further reranks each sentence by considering the relevance of its adjacent sentences. The proposed retrieval model has been compared with other traditional retrieval models and experiment results indicate its significant improvements in retrieval effectiveness. Keywords: Sentence retrieval, query type identification, query expansion.

1 Introduction

Sentence retrieval is to retrieve query-relevant sentences in response to users' queries. It has been widely applied in many traditional applications, such as passage retrieval [1], document summarization [2], question answering [3], novelty detection [4] and content-based retrieval presentation [5]. A lot of different approaches have been proposed for sentence retrieval. Most of them, however, have not been proven efficient enough. The main reason is the limited information expressed in sentences. To improve the performance of sentence retrieval, besides the key words in queries and sentences, additional features that are helpful for indicating sentences' relevance should be explored. Query type, which expresses relevant information satisfying users' information need, has been effectively used in some applications involving sentence retrieval, such as question answering, which looks for sentences containing the expected type of answer, and novelty detection, where sentences involving information of a special type are considered more relevant. However, little effort has been made to incorporate such information into the process of keyword-based information retrieval, where the main difficulty is the identification of query-relevant information types. This paper proposes a new sentence retrieval model, in which the information about query types is explored and incorporated into the retrieval process. The idea is similar to that of query expansion. The difference is that this model expands each query with the relevant information types instead of concrete terms. In this paper,


query types are defined as the types of the expected information entities, such as persons, locations, numbers, dates, times and etc, which are considered necessary for satisfying user’s request or information need. Therefore, in the retrieval process, sentences expected to be relevant should meet with the requirements both in query terms and query types. To achieve such a retrieval model, the most important factor is the identification of query types. This paper proposes a solution based on classification. This classification model makes a full use of the theory of information association, with the purpose to utilize the potential associations between terms and information types to obtain the optimized classification results. In addition to term and type information described above, another type of information is also considered in the evaluation of sentence relevance, that is, the proximity between sentences. The idea underneath is that relevant sentences always tend to occur nearby. Then, each sentence is further re-ranked by considering the relevance of its adjacent sentences. The remainder of the paper is structured as follows: Section 2 introduces the related studies in sentence retrieval. Section 3 describes the proposed sentence retrieval model and the classification approach for query type identification. In Section 4, the experimental results are presented. Section 5 concludes the paper.

2 Related Works

Most existing approaches for sentence retrieval are based on term matching between query and sentence. They are essentially applications of algorithms designed for document retrieval [6] [7] [8]. However, compared with documents, sentences are much smaller. Thus, the performance of typical document retrieval systems on the retrieval of sentences is significantly worse. Some systems try to utilize linguistic or other features of sentences to facilitate the detection of relevant sentences. In the study of [5], factors used for ranking sentences include the position of the sentence in the source document, the words contained in the sentence and the number of query terms contained in the sentence. In [9], semantic and lexical features are extracted from the initially retrieved sentences to filter out possible non-relevant sentences. In addition to the mining of features in sentences, some systems concentrate on the study of features in queries. One of the most significant features of queries is the query type. In most cases, the query type is defined as the entity types of the expected answer. For example, in [4], queries are described as patterns that include both query words and the required answer types. Then, these patterns are used to retrieve sentences. Sentences without the expected type of named entities are considered irrelevant. In the domain of question answering, query type is also an important factor for sentence relevance evaluation. Given a question, the question is analyzed for its expected answer type and then submitted to retrieve sentences that contain the key words from the question or the tokens or phrases that are consistent with the expected answer type of the question. Studies have shown the positive effects of query type on sentence retrieval, which, however, in the context of keyword-based retrieval, has not been fully utilized. The main difficulty is the identification of query types, which becomes one of the focuses of our studies in this paper.


3 Sentence Retrieval Model Involved with Query Type The proposed sentence retrieval model with type-based query expansion measures sentence relevance from two perspectives: lexical similarity and type similarity. Lexical similarity is an indicator of the degree to which query and sentence discuss the same topic. It is always evaluated according to term matching between query and sentence. Given two terms, their similarity can be viewed from different point of views, such as, synonymy, variant, co-occurrence or others. Since this paper expects to pay more attention to the application of query type, we adopt the most basic definition for lexical similarity. If two terms are exactly the same, they are lexical similarity. Type similarity is actually to evaluate the coincidence between the query types and information types related to sentence. From sentence perspective, the related information types are defined as the types of entities contained in sentence and can be identified by using the existing named entity recognition technique. However, from query perspective, the identification of query types is a little more difficult. In this paper, this problem is solved by a solution based on classification. 3.1 Information Association Based Classification Inspired by the theory of Hyperspace Analogue to Language (HAL) [10], a novel classification approach is proposed to solve the problem of query type identification. In this approach, a special type of information association is explored, with the purpose of reflecting the dependencies between terms and information types. When such kind of associations is incorporated into query classification, information types that have the most probabilities to be associated with a query can be identified. The implementation of this classification is based on the information model reflecting the Associations between Terms and Information Types (ATIT). Construction of the ATIT Model. The construction of the ATIT model consists of two steps. The first step is to construct the HAL model given the large document corpus. The HAL model is constructed in the traditional way (See [10] for more details). Let T = {t1, t2 … tn} be the term set of the document corpus, the constructed HAL is finally represented as a n*n matrix MHAL, in which each term of T is described as a ndimension term vector Vi HAL = {MHAL(i,1), …, MHAL(i,n)}, where MHAL(i, j) describes the association strength between the terms ti and tj. The second step realizes the construction of ATIT model based on HAL model. In the ATIT model, each term ti is expected to be described as a m-dimension vector Vi ATIT = {MATIT(i,1), …, MATIT(i,m)}, where MATIT(i,j) represents the association strength between the term ti and the information type cj. The construction of ATIT can be further divided into three sub-steps: Firstly, entity type related to each term ti is discovered. In this paper, it is realized by named entity recognition (NER) [11]. However, it is noted that not all entities of all types can be well identified. To solve this problem, the manual NER approach is adopted, which is to use human generated entity annotation results to realize a part of these entity recognitions. Secondly, based on the association information provided by HAL, the association strengths between


terms and information types are calculated. Let terms of information type cj be Tcj = {tcj1, …, tcjk}, the association strength between term ti and cj is calculated as:

MATIT(i, j) = Σ_{p = cj1, …, cjk} MHAL(i, p) .    (1)
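The computation in (1) amounts to summing HAL columns over the terms of each information type. A minimal sketch using NumPy is given below; the terms_of_type index (term indices per entity type) is a hypothetical representation, not one prescribed by the paper.

import numpy as np

def build_atit(M_hal, terms_of_type):
    """M_hal: n-by-n HAL matrix; terms_of_type: list of m lists, where terms_of_type[j]
    holds the indices of the terms recognized as entities of type c_j (Eq. 1)."""
    n = M_hal.shape[0]
    m = len(terms_of_type)
    M_atit = np.zeros((n, m))
    for j, term_indices in enumerate(terms_of_type):
        # M_ATIT(i, j) = sum of M_HAL(i, p) over the terms p of information type c_j
        M_atit[:, j] = M_hal[:, term_indices].sum(axis=1)
    return M_atit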

Implementation of Classification. The ATIT information model makes it possible to identify the relationships between query terms and information types. Then, the probability of query Q being relevant to information type cj can be evaluated by:

P(cj | Q) = Σ_{ti ∈ Q} P(cj | ti) P(ti | Q) .    (2)

where P(ti | Q) means the probability of ti in Q. Since queries are normally short, each P(ti | Q) can be approximately assigned an equal value, i.e., P(ti | Q) = 1/|Q|, where |Q| is the number of terms in the query. The probability P(cj | ti) represents the association strength of the information type cj with respect to the term ti. According to the Bayesian formula, it can be transformed into:

P(cj | ti) = P(ti | cj) * P(cj) / P(ti) .    (3)

where P(cj) and P(ti) are respectively the prior probabilities of category cj and query term ti. Here, we set them to be constants. P(ti | cj) represents the conditional probability of ti. Based on the previously constructed ATIT model, it is defined as:

P(ti | cj) = MATIT(i, j) / Σ_{i=1..n} MATIT(i, j) .    (4)

P (c j | Q ) =



ti∈Q

M ATIT (i, j ) n

∑ M ATIT (i, j)

.

(5)

i =1

3.2 Relevance Ranking of Sentence Experiments in [12] show that although there is no significant difference in the performance of traditional retrieval models when implemented in sentence retrieval, TFIDF technique performs consistently better than the others across different query

688

K. Cai et al.

sets. Thus, in this paper, we decided to use the vector space model (VSM) with tf.idf weighting as the retrieval framework. The retrieval model is formulated as: sim( S , Q) = λ

∑WS ,l *WQ,l + (1 − λ ) ∑WS ,t *WQ,t

l∈LS ∧ LQ

t∈TS ∧TQ

.

(6)

where, the parameter λ is used to control the influence of each component to sentence retrieval. Let A be the sentence S or the query Q, then LA and TA respectively represent the term vector and the type vector with respect to A. WA,l denotes the weight of term l in A and can be defined as log(Ltfl+1)*log(N /Ldfl+1), where Ltfl is the frequency of term l in A, N is the total number of sentences and Ldfl is the number of sentences containing the term l. WS,,t denotes the weight of the information type t in sentence S and can be defined as log(Ttft+1)*log(N/Tdft+1), where Ttft is the frequency of entities in S with type t and Tdft is the number of sentences containing entities of type t. WQ,,t denotes the weight of the information type t in query Q. It is the normalized probability of each query type and defined as: WQ ,t =

P (t | Q )

∑ p(t | Q)

.

(7)

t∈TQ
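The scoring of Equation (6) could be implemented as sketched below, assuming the term and type weights have already been computed; the map-based layout and helper names are illustrative only.

```cpp
#include <cmath>
#include <string>
#include <unordered_map>

// Hypothetical sketch of Equation (6): a lambda-weighted mix of the lexical
// tf.idf component and the type component. The maps hold the precomputed
// weights W_{S,l}, W_{Q,l}, W_{S,t}, W_{Q,t}.
double sentenceScore(
    const std::unordered_map<std::string, double>& wSentTerm,
    const std::unordered_map<std::string, double>& wQueryTerm,
    const std::unordered_map<std::string, double>& wSentType,
    const std::unordered_map<std::string, double>& wQueryType,
    double lambda)
{
    double lexical = 0.0;
    for (const auto& [term, wq] : wQueryTerm) {
        auto it = wSentTerm.find(term);          // term shared by S and Q
        if (it != wSentTerm.end()) lexical += it->second * wq;
    }
    double typed = 0.0;
    for (const auto& [type, wq] : wQueryType) {
        auto it = wSentType.find(type);          // type shared by S and Q
        if (it != wSentType.end()) typed += it->second * wq;
    }
    return lambda * lexical + (1.0 - lambda) * typed;
}

// Term/type weight following the paper's definition, as written:
// W = log(tf + 1) * log(N/df + 1).
double weight(double tf, double df, double numSentences) {
    return std::log(tf + 1.0) * std::log(numSentences / df + 1.0);
}
```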

In our relevant sentence retrieval model, a sentence that contains not only query terms but also entities of the expected information types is considered more relevant. This is the main difference between our type-based expansion retrieval model and other word-based approaches. Since this method estimates the most likely query context from the query-relevant information types, a more accurate relevance judgment is expected. However, in most cases this approach is more effective for long sentences than for short sentences. To address this problem, a further optimization is considered. Conclusions in [9] show that query-relevant information often appears in several contiguous sentences. Inspired by this observation, this paper proposes to re-rank the sentences according to the following rule: for a sentence Si that contains information of the expected types but few query terms, if a sentence Sj immediately before or after Si has many terms in common with the query but does not contain the expected information types, the ranks of Si and Sj are both improved by: sim'(Si, Q) = sim(Si, Q) + α·sim(Sj, Q), sim'(Sj, Q) = sim(Sj, Q) + β·sim(Si, Q), where 0 ≤ α, β ≤ 1 and sim(Si, Q) and sim(Sj, Q) are the initial relevance values of Si and Sj evaluated by Formula (6).
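A sketch of this re-ranking rule, assuming the initial scores of Formula (6) and two externally supplied per-sentence predicates; the accumulation over both neighbours is our own simplification.

```cpp
#include <cstddef>
#include <vector>

// Proximity re-ranking: if sentence i carries the expected entity types but
// few query terms, and its neighbour j carries many query terms but not the
// expected types, both scores are boosted using the initial values.
void proximityRerank(std::vector<double>& sim,
                     const std::vector<bool>& hasExpectedTypes,
                     const std::vector<bool>& hasManyQueryTerms,
                     double alpha, double beta)
{
    const std::vector<double> original = sim;   // sim(S, Q) from Formula (6)
    for (std::size_t i = 0; i < sim.size(); ++i) {
        if (!hasExpectedTypes[i] || hasManyQueryTerms[i]) continue;
        for (std::size_t j : {i - 1, i + 1}) {   // previous and next sentence
            if (j >= sim.size()) continue;       // out of range (also i == 0)
            if (hasManyQueryTerms[j] && !hasExpectedTypes[j]) {
                sim[i] += alpha * original[j];
                sim[j] += beta  * original[i];
            }
        }
    }
}
```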

4 Experiments

We use the collections of the TREC Novelty Track 2003 and 2004 to evaluate our proposed retrieval method. To validate its applicability to keyword-based retrieval, we select the TREC topic titles to formulate the queries. The first part of the experiments verifies the potential associations between queries and information of certain types. Table 1 gives the statistical results. In this table, the second and third rows show the distribution of entity information in relevant sentences. Significant amounts of entity information have been discovered in


most relevant sentences. This suggests their large potential for sentence retrieval. In Table 1, P(N) represents the probability of queries concerned with N types of entities. It can be seen that most queries involve one or two different types of information entities. This further supports the assumption of underlying associations between queries and information types, which, when considered carefully, can improve the effectiveness of sentence retrieval.

Table 1. Statistical information about entities and entity types in relevant sentences

                                        TREC 2003   TREC 2004
# of relevant sentences per query       311.14      166.86
# of entities per relevant sentence     1.703       1.935
P(N)    N=0                             18%         32%
        N=1                             38%         30%
        N=2                             30%         26%
        N>2                             14%         12%

Another part of the experiments compares the classification performance of our proposed information association based approach (IA) with the word pattern based approach (WP) applied in [4] and a machine learning classification approach (ML), which in this paper is a support vector machine. We select the data obtained at the website (http://l2r.cs.uiuc.edu/~cogcomp/Data/QA/QC/) as the training and test data. Since these data are in the form of questions, to make them applicable to our classification case, we transformed them into the form of keyword queries. These queries contain the kernel terms of the questions, excluding the interrogative terms. Table 2 shows the experimental results measured by macro precision, macro recall and macro F1-measure. As shown in Table 2, the word pattern based approach has a higher classification precision, but a lower recall. This shows, on the one hand, the relative accuracy of the defined word patterns and, on the other hand, their limited coverage. Compared with the machine learning approach, our proposed approach has a similar classification precision, but achieves a higher recall. In our experiments, we discovered that some valuable relationships between terms and information types were not identified as expected. This is because a large amount of the needed entity information cannot be properly recognized by the applied entity recognition technique. The study of entity recognition techniques is out of the scope of this paper. However, it is believed that as entity recognition techniques improve, the information association based classification approach can achieve better performance. The results shown in Table 3 compare the performance of the proposed sentence retrieval model with type-based query expansion (TPQE) to four other retrieval approaches: the TFIDF model (TFIDF), the BIR model (BIR), the KL-divergence model (KLD) and the traditional term-based expansion sentence retrieval model (TQE). Statistical analysis of the retrieval results shows that, in the context of sentence retrieval, the application of traditional document retrieval algorithms amounts to detecting whether query terms occur in a sentence or not. As shown in Table 3, there is no significant difference in the performance of these traditional retrieval models. However, since most sentences contain few words, the TQE approach tends to add noise to the sentence retrieval process. Sentences containing expansion terms may not


be as relevant to the original query as sentences containing the original query terms. Table 3 shows the performance of this approach. Comparatively speaking, our proposed method considers query expansion from another perspective. It helps to identify the information most relevant to the query and therefore avoids introducing much noise. As shown in Table 3, our proposed approach performs better than all the other approaches.

Table 2. Performances of different classification approaches

        Macro precision   Macro recall   Macro F1
WP      0.7230            0.4418         0.4990
ML      0.6980            0.5181         0.5778
IA      0.6895            0.5903         0.6081

Table 3. Performance comparison in finding relevant sentences for 50 queries in TREC 2003 & TREC 2004

Database                   TFIDF   BIR     KLD     TQE     TPQE
50 queries in TREC 2003    0.349   0.292   0.330   0.304   0.385
50 queries in TREC 2004    0.228   0.178   0.2833  0.212   0.249

Table 4. Retrieval performance with or without consideration of proximity

           50 queries in TREC 2003    50 queries in TREC 2004
Methods    TPQE       TPQE_PX         TPQE       TPQE_PX
P@5        0.6324     0.6989          0.3920     0.4380
P@10       0.6559     0.6860          0.3875     0.4200
P@20       0.5326     0.6639          0.3704     0.4020
P@30       0.5919     0.6400          0.3680     0.4160
P@50       0.5534     0.5896          0.3542     0.3976
P@100      0.5026     0.5438          0.3185     0.3413

With the hypothesis that relevant sentences tend to occur in close proximity to each other, we further propose the proximity-based optimization method for re-ranking the sentences. This optimization scheme focuses on the distributions of query terms and query types in neighbouring sentences, with the hope of revealing relevant sentences which, taken together, provide the integrated information satisfying the user's information need. Table 4 illustrates the experimental results, where P@n means the precision at the top n ranked documents. As shown in Table 4, retrieval with consideration of sentence proximity achieves a clear improvement in retrieval effectiveness. This further validates the relevance relationships among adjacent sentences.


5 Conclusion

Compared with traditional sentence retrieval models, the features of the proposed retrieval model include: it uses query type information as a factor for identifying sentence relevance, and it re-ranks sentences by considering the relationships between adjacent sentences. The proposed retrieval model has been compared with other traditional retrieval models. Experimental results indicate that it produces significant improvements in retrieval effectiveness.

References
1. Salton G., Allan J., Buckley C.: Automatic structuring and retrieval of large text files. Communication of the ACM, Vol. 37(2). (1994) 97-108
2. Daumé III H., Marcu D.: Bayesian query-focused summarization. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Sydney, Australia (2006) 305-312
3. Li X.: Syntactic Features in Question Answering. In Proceedings of 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Toronto, Canada. (2003) 383-38
4. Li X., Croft W.: Novelty detection based on sentence level patterns. In Proceedings of 2005 ACM CIKM International Conference on Information and Knowledge Management. Bremen, Germany. (2005) 744-751
5. White R., Jose J., Ruthven I.: Using top-ranking sentences to facilitate effective information access. Journal of the American Society for Information Science and Technology. Vol. 56(10). (2005) 1113-1125
6. Larkey L., Allan J., Connell M., Bolivar A., Wade C.: UMass at TREC 2002: Cross Language and Novelty Tracks. In Proceedings of 11th Text REtrieval Conference. Gaithersburg, Maryland. (2002) 721-732
7. Schiffman B.: Experiments in Novelty Detection at Columbia University. In Proceedings of 11th Text REtrieval Conference. Gaithersburg, Maryland. (2002) 188-196
8. Zhang M., Lin C., Liu Y., Zhao L., Ma S.: THUIR at TREC 2003: Novelty, Robust and Web. In Proceedings of 12th Text REtrieval Conference. Gaithersburg, Maryland. (2003) 556-567
9. Collins-Thompson K., Ogilvie P., Zhang Y., Callan J.: Information filtering, Novelty Detection, and Named-Page Finding. In Proceedings of 11th Text REtrieval Conference. Gaithersburg, Maryland. 107-118
10. Lund K., Burgess C.: Producing High dimensional Semantic Spaces from Lexical Co-occurrence. Behavior Research Methods, Instruments, & Computers, Vol. 28, (1996) 203-208
11. Andrew B.: A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, New York University, (1999)
12. Allan J., Wade C., Bolivar A.: Retrieval and Novelty Detection at the Sentence Level. In Proceedings of 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Toronto, Canada. (2003) 314-321

An Extended R-Tree Indexing Method Using Selective Prefetching in Main Memory Hong-Koo Kang, Joung-Joon Kim, Dong-Oh Kim, and Ki-Joon Han School of Computer Science & Engineering, Konkuk University, 1, Hwayang-Dong, Gwangjin-Gu, Seoul 143-701, Korea {hkkang,jjkim9,dokim,kjhan}@db.konkuk.ac.kr

Abstract. Recently, research has been performed on a general method that can improve the cache performance of the R-Tree in main memory by reducing the size of an entry so that a node can store more entries. However, this method generally requires additional processes to reduce the entry information. In addition, a cache miss always occurs on moving between a parent node and a child node. To solve these problems, this paper proposes the SPR-Tree (Selective Prefetching R-Tree), which is an extended R-Tree indexing method using selective prefetching according to node size in main memory. The SPR-Tree can produce wider nodes to optimize prefetching without additional modifications to the R-Tree. Moreover, the SPR-Tree can reduce the cache misses that occur in the R-Tree. In our simulation, the search, insert, and delete performance of the SPR-Tree improved by up to 40%, 10%, and 30%, respectively, compared with the R-Tree.

Keywords: SPR-Tree, Extended R-Tree, Cache Performance, Cache Miss, Main Memory.

1 Introduction

Recently, with the speed gap between the processor and the main memory becoming broader, how effectively the cache memory is used in a main memory-based index has a critical impact on the performance of the entire system[1,5]. The R-Tree is similar to the B-Tree, but is used as a spatial access method for indexing multidimensional data[2]. Since the R-Tree is originally designed to reduce disk I/O effectively for the disk-based index, the node size is optimized for the disk block. However, the R-Tree is not suitable for the cache memory with a small block. Delay time caused by cache misses accounts for a significant part of the entire execution time[10]. Especially when the R-Tree, as in a main memory DBMS, resides in the main memory, disk I/O does not affect the entire performance seriously. Consequently, studies on index structures and algorithms with improved cache performance are being carried out by numerous researchers in many ways[3,5-8]. Rao and Ross pointed out the importance of the cache performance in designing a main memory index and proposed the CSS-Tree (Cache-Sensitive Search Tree), which has a faster search performance than the Binary Search Tree or the T-Tree in the


read-only OLAP environment[6]. They also proposed the CSB+-Tree, which is an extension of the CSS-Tree and can improve the cache performance of the B+-Tree[7]. Sitzmann and Stuckey proposed the pR-Tree (partial R-Tree), which adjusts the size of the R-Tree node to that of the cache block and deletes unnecessary information within the MBR (Minimum Bounding Rectangle) to store more information in a node[8]. Kim and Cha proposed the CR-Tree (Cache-conscious R-Tree), which compresses the MBR of an entry to include more entries in a node[3]. The typical approach for cache performance improvement is to minimize cache misses by reducing the size of the entry to increase the fanout and store more entries in a node. However, in this approach, the update performance is generally lowered due to additional operations to recover the compressed entry information, and the cache misses occurring when moving between nodes still lower the performance of the entire system. In order to solve the above problems, this paper proposes the SPR-Tree (Selective Prefetching R-Tree), an extended R-Tree indexing method which applies selective prefetching to the R-Tree in main memory. The SPR-Tree loads child nodes into the cache memory in advance, extends the node size to be optimized for prefetching without transforming the R-Tree radically, and reduces cache misses occurring when moving between nodes. The performance improvement of the SPR-Tree using selective prefetching is in proportion to the size and the number of the nodes to access. Therefore, it is more effective in the range query than in the point query. The rest of this paper is organized as follows. Chapter 2 introduces selective prefetching and then analyzes the existing cache conscious index methods. Chapter 3 explains the structure of the SPR-Tree and algorithms for the SPR-Tree. In Chapter 4, the performance of the SPR-Tree is analyzed and the results of the SPR-Tree evaluation are presented. Finally, the conclusion is provided in Chapter 5.

2 Related Works

This chapter will introduce selective prefetching and analyze the various existing cache conscious index methods.

2.1 Selective Prefetching

The cache memory is used to provide data to the processor in a fast way. Located between the main memory and the processor, the cache memory generally consists of two levels, L1 cache and L2 cache. The L1 cache is located between the register and the L2 cache, while the L2 cache is located between the L1 cache and the main memory[4]. When the processor accesses data, if the data is present in the cache memory, it is called a "cache hit", and if the data is not present, it is called a "cache miss". The cache block is the basic transfer unit between the cache memory and the main memory. Current systems tend to have larger cache blocks and larger cache memory capacity. Typical cache block sizes range from 32 bytes to 128 bytes. Generally, the data cache follows the basic principle of data locality. A tree


structure has low data locality, as the data to be referenced is accessed through pointers. Therefore, in order to improve the cache performance in a tree structure, the amount of data to access should be reduced or selective prefetching should be executed. Selective prefetching is a technique to selectively load data into the cache memory in advance to accelerate program execution. Especially, selective prefetching can reduce cache misses by loading data which does not exist in the cache memory before the processor requests it. In order to reduce cache misses in the R-Tree, selective prefetching should be used to reduce the memory delay occurring when accessing nodes. Selective prefetching is controlled in two ways: hardware-based prefetching, where the prefetching is automatically carried out by the processor, and software-based prefetching, where a prefetching command is inserted into the program source code[9].

2.2 Cache Conscious Index Methods

The CSB+-Tree is a variant of the B+-Tree, removing all child node pointers except the first child node pointer and storing child nodes consecutively in order to reduce cache misses in the B+-Tree[7]. But this method of eliminating pointers is not so effective in the R-Tree, where pointers account for a relatively small part. And since child nodes are stored consecutively in the CSB+-Tree, every update operation requires reorganization of the consecutively arranged child nodes. The pR-Tree is a variant of the R-Tree, removing child MBR coordinate values overlapped with those of the parent MBR to reduce cache misses in the R-Tree[8]. This method also eliminates the pointers, like the CSB+-Tree, and shows better performance when the number of entries is small. However, this method performs worse as the number of entries increases, since the number of child MBR coordinate values overlapped with those of the parent MBR decreases. In addition, due to the elimination of overlapped child MBR coordinate values, additional operations are needed to reconstruct the eliminated coordinate values, which lowers the update performance. The CR-Tree is a kind of R-Tree that compresses the MBR, which accounts for most of the index, and uses the compressed MBR as a key[3]. In the CR-Tree, the MBR is compressed according to the following procedure: the MBR of the child node is represented in coordinates relative to the MBR of the parent node and is quantized so that it can be represented in a definite number of bits. However, while compressing the MBR in the CR-Tree, a small error can occur and this may produce a wrong result (i.e., a false hit). Moreover, additional operations for reconstruction of the compressed MBR in the update operation can lower the update performance.
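As an illustration of the software-based prefetching mentioned in Sect. 2.1, the sketch below issues GCC/Clang's __builtin_prefetch intrinsic for the child that will be visited next while the current node is still being processed; the node layout here is only a placeholder, not the SPR-Tree structure defined in Chapter 3.

```cpp
// Software-based prefetching with a compiler intrinsic: before the current
// node is processed, the child that will be visited next is requested, so
// its memory access overlaps with the work on the current node.
struct Node {
    int   keys[14];
    Node* children[15];
};

long long sumKeys(const Node* node, int depth) {
    if (node == nullptr || depth == 0) return 0;
    // Hint: the leftmost child will be needed soon; 0 = read, 3 = high locality.
    __builtin_prefetch(node->children[0], 0, 3);
    long long sum = 0;
    for (int k : node->keys) sum += k;           // work on the current node
    return sum + sumKeys(node->children[0], depth - 1);
}
```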

3 SPR-Tree

This chapter will describe the SPR-Tree, a main memory-based R-Tree using selective prefetching. First, the structure and characteristics of the SPR-Tree will be given, and then the algorithms used in the SPR-Tree will also be presented.


3.1 Structure The SPR-Tree, similar to the R-Tree, has the root node, the intermediate node, and the leaf node. All operations on the SPR-Tree start from the root node and the references to real data objects exist only in the leaf node. Figure 1 illustrates the node structure of the SPR-Tree. The SPR-Tree uses a rectangle, which is a rectilinear shape that can completely contain other rectangles or data objects.

Fig. 1. Node Structure of the SPR-Tree

In Figure 1(a), P and N represent the node level and the number of entries in a node, respectively. Each of E1, E2, …, En (n = 3+5k, k ≥ 0) represents an entry, which has two types, that is, an entry for the root node or an intermediate node and an entry for a leaf node. Figure 1(b) shows the entry for the root node or an intermediate node, where RECT is a rectangle which completely contains all rectangles in the child node's entries and p is the address of a child node. Figure 1(c) represents the entry for the leaf node, where RECT is a rectangle which completely contains a data object and oid refers to the data object. Since the SPR-Tree adjusts the number of entries in a node to suit the cache block, the node size is decided in proportion to the cache block size. Generally, the cache block size can be 32 or 64 bytes. If the cache block size is 32 bytes, the node size becomes 64+160k (k ≥ 0) bytes, and if it is 64 bytes, the node size becomes 64+320k (k ≥ 0) bytes. Figure 2 shows an example of the SPR-Tree. As Figure 2 shows, rectangles can enclose a single data object or one or more rectangles. For example, rectangle R8, which is at the leaf level of the SPR-Tree, contains data object O. Rectangle R3, which is at the intermediate level of the SPR-Tree, contains rectangles R8, R9, and R13. Rectangle R1, which is at the root level, contains rectangles R3 and R4. In Figure 2, a prefetching node group enclosed by a dotted line is determined according to the node size.
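The layout of Figure 1 can be sketched with plain structs as below; the concrete field types, the choice k = 1, and the two-dimensional rectangle are illustrative assumptions rather than the authors' exact definition.

```cpp
// Sketch of the node layout from Fig. 1. MAX_ENTRIES follows the 3 + 5k rule
// from the text (k = 1 here).
constexpr int MAX_ENTRIES = 3 + 5 * 1;

struct Rect {                  // axis-aligned rectangle (MBR)
    float xmin, ymin, xmax, ymax;
};

struct Node;                   // forward declaration

struct Entry {
    Rect rect;                 // RECT: bounds of the child node or data object
    union {
        Node* child;           // root / intermediate node entry
        long  oid;             // leaf node entry: reference to the data object
    };
};

struct Node {
    int   level;               // P: node level (0 = leaf)
    int   count;               // N: number of used entries
    Entry entries[MAX_ENTRIES];
};
```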







3.2 Algorithms

This section will describe the search, node split, insert, and delete algorithms of the SPR-Tree in detail.


Fig. 2. Example of the SPR-Tree

3.2.1 Insert Algorithm

The insert operation repeats, from the root node down to the leaf node, a process of using the lower nodes' rectangle information contained in the entries of each node to determine whether the size expansion of the node can be minimized when an object is inserted into the leaf node. If the leaf node becomes full, a node split occurs. In the insert algorithm of the SPR-Tree, prefetching is carried out while looking for the leaf node in which to insert an entry. Figure 3 shows the insert algorithm of the SPR-Tree.

3.2.2 Delete Algorithm

The delete operation repeats, from the root node down to the leaf node, a process of using the lower nodes' rectangle information contained in the entries of each node to determine whether a query region is contained in or overlaps the lower nodes. If an entry is deleted and the number of remaining entries falls below the minimum number of entries in the leaf node, the leaf node is deleted and its remaining entries are reinserted into the SPR-Tree. The delete algorithm of the SPR-Tree uses a prefetching command based on the node size: the child node to be accessed is prefetched after the current node according to the node size. Figure 4 shows the delete algorithm of the SPR-Tree.

3.2.3 Search Algorithm

The search operation descends the SPR-Tree from the root node to the leaf node. It repeats a process of using the lower nodes' rectangle information contained in the entries of each node to determine whether the lower node contains or overlaps a query region. If the lower node is contained in or overlaps the query region, the search operation follows the lower node, treating it as the root node, until it reaches the leaf node. The search algorithm of the SPR-Tree uses a prefetch command to prefetch the child node to be accessed after the current node. If the node has few entries, the SPR-Tree makes a prefetching node group using some nodes at the same level and prefetches it; when the node has many entries, it prefetches only the child node to be accessed into the cache memory. Figure 5 shows the search algorithm of the SPR-Tree.
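A possible rendering of the search descent with selective prefetching, reusing the Node/Entry sketch above; issuing the prefetches for all qualifying children before descending lets their cache misses overlap with the remaining overlap tests. This is only an approximation of the algorithm in Figure 5, and the helper names are our own.

```cpp
#include <vector>

bool overlaps(const Rect& a, const Rect& b) {
    return a.xmin <= b.xmax && b.xmin <= a.xmax &&
           a.ymin <= b.ymax && b.ymin <= a.ymax;
}

void search(const Node* node, const Rect& query, std::vector<long>& hits) {
    if (node->level > 0) {
        // First pass: prefetch every child that will be visited, so the
        // cache misses overlap with the tests in the second pass.
        for (int i = 0; i < node->count; ++i)
            if (overlaps(node->entries[i].rect, query))
                __builtin_prefetch(node->entries[i].child, 0, 3);
        for (int i = 0; i < node->count; ++i)
            if (overlaps(node->entries[i].rect, query))
                search(node->entries[i].child, query, hits);
    } else {
        for (int i = 0; i < node->count; ++i)
            if (overlaps(node->entries[i].rect, query))
                hits.push_back(node->entries[i].oid);   // leaf: report object
    }
}
```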


Fig. 3. Insert Algorithm


Fig. 4. Delete Algorithm

3.2.4 Node Split Algorithm

When a leaf node is full during the execution of an insert operation in the SPR-Tree, the node split operation must be executed. First, the entries in the node are divided into two nodes with minimum rectangle expansion. If the number of entries in the parent node exceeds its maximum due to the node split, the parent node must also be split. The node split algorithm prefetches the current node before the split and creates two new nodes to distribute the entries of the current node. Figure 6 shows the node split algorithm of the SPR-Tree.

Fig. 5. Search Algorithm

Fig. 6. Node Split Algorithm

4 Performance Evaluation

The system used in the performance evaluation was equipped with an Intel Pentium III 1GHz, 1GB main memory, and L1 and L2 caches whose block size is 32 bytes. As test data, we created 10,000 objects, squares with an average side length of 0.0001, uniformly distributed in a square with side length 1 representing the whole area.


Figure 7 shows the performance results of the search operation. The query region was set to occupy 30%~70% of the whole area. As shown in Figure 7, the SPR-Tree has better search performance than the R-Tree, and the improvement through prefetching is consistent, as memory delay is reduced while accessing nodes. The search performance of the SPR-Tree was improved by up to 35% over the R-Tree. Figure 8 shows the performance results of the search operation on a skewed data set. As shown in Figure 8, the larger the node size is, the better the search performance. This is because prefetching reduces more memory delay time when the spatial objects are skewed, since skew increases the overlap between nodes and the number of nodes to access. The search performance of the SPR-Tree was improved by up to 40% over the R-Tree for the skewed data set.

Fig. 7. Performance of Search Operations

Fig. 8. Performance of Search Operations in Skewed Data Set

Figure 9 shows the performance results of the insert operation. The inserted spatial objects had an average side length of 0.0001. As shown in Figure 9, when the node size is larger, the insert time increases, but the performance improvement rate due to prefetching also increases. This is because, when prefetching is used, a larger node size brings higher performance. The insert performance of the SPR-Tree showed up to 10% improvement over the R-Tree. Figure 10 shows the performance results of the delete operation. We deleted objects contained in a region whose average side length was 0.001. In Figure 10, a larger node size generally leads to better delete performance, and the performance improvement through prefetching is consistent, as the memory delay

Fig. 9. Performance of Insert Operations

Fig. 10. Performance of Delete Operations


time reduced by prefetching is consistent while accessing nodes. In the evaluation, the delete performance of the SPR-Tree was improved up to 30% over the R-Tree.

5 Conclusion

Recently, an approach that improves the main memory-based R-Tree index structure by reducing the node size was proposed. However, in this approach, the update performance is lowered due to additional operations to recover the compressed entry information, and cache misses occurring when moving between nodes still contribute to the lowered performance of the entire system. In order to solve the above problems, this paper proposed the SPR-Tree, which applies selective prefetching to the R-Tree to reduce cache misses as well as eliminate the additional cost in the update operation. The SPR-Tree optimizes the node size for prefetching and minimizes cache misses by prefetching child nodes when moving between nodes. In the performance evaluation, the SPR-Tree improved up to 40% in the search operation, up to 10% in the insert operation, and up to 30% in the delete operation over the R-Tree.

Acknowledgements This research was supported by the MIC(Ministry of Information and Communication), Korea, under the ITRC(Information Technology Research Center) support program supervised by the IITA(Institute of Information Technology Assessment).

References
1. Chen, S., Gibbons, P. B., Mowry, T. C., Valentin, G.: Fractal Prefetching B+-Trees: Optimizing Both Cache and Disk Performances. Proceedings of ACM SIGMOD Conference (2002) 157-168
2. Guttman, A.: R-Trees: a Dynamic Index Structure for Spatial Searching. Proceedings of ACM SIGMOD Conference (1984) 47-54
3. Kim, K. H., Cha, S. K., Kwon, K. J.: Optimizing Multidimensional Index Tree for Main Memory Access. Proceedings of ACM SIGMOD Conference (2001) 139-150
4. Mowry, T. C., Lam, M. S., Gupta, A.: Design and Evaluation of a Compiler Algorithm for Prefetching. Proceedings of International Conference on Architectural Support for Programming Languages and Operating Systems (1992) 62-73
5. Park, M. S., Lee, S. H.: A Cache Optimized Multidimensional Index in Disk-Based Environments. IEICE Transactions on Information and Systems, Vol.E88-D (2005) 1940-1947
6. Rao, J., Ross, K. A.: Cache Conscious Indexing for Decision-Support in Main Memory. Proceedings of International Conference on VLDB (1999) 78-89
7. Rao, J., Ross, K. A.: Making B+-Trees Cache Conscious in Main Memory. Proceedings of ACM SIGMOD Conference (2000) 475-486
8. Sitzmann, I., Stuckey, P. J.: Compacting Discriminator Information for Spatial Trees. Proceedings of Australasian Database Conference (2002) 167-176
9. VanderWiel, S. P., Lilja, D. J.: Data Prefetch Mechanisms. ACM Computing Surveys, Vol.32 (2000) 174-199
10. Zhou, J., Ross, K. A.: Buffering Accesses of Memory-Resident Index Structures. Proceedings of International Conference on VLDB (2003) 405-416

Single Data Copying for MPI Communication Optimization on Shared Memory System

Qiankun Miao1, Guangzhong Sun1, Jiulong Shan2, and Guoliang Chen1

1 Anhui Province-MOST Key Co-Lab of High Performance Computing and Its Applications, Department of Computer Science, University of Science and Technology of China, Hefei, 230027, P.R. China
2 Microprocessor Technology Lab, Intel China Research Center, Beijing, China
[email protected], [email protected], [email protected], [email protected]

Abstract. Shared memory systems are an important platform for high performance computing. In traditional parallel programming, the message passing interface (MPI) is widely used. But current implementations of MPI do not take full advantage of shared memory for communication: a double data copying method is used to copy data to and from a system buffer for message passing. In this paper, we propose a novel method to design and implement the communication protocol for MPI on shared memory systems. The double data copying method is replaced by a single data copying method; thus, messages are transferred without the system buffer. We compare the new communication protocol with that in MPICH, an implementation of MPI. Our performance measurements indicate that the new communication protocol outperforms MPICH with lower latency. For point-to-point communication, the new protocol performs up to about 15 times faster than MPICH, and it performs up to about 300 times faster than MPICH for collective communication.

1

Introduction

Message Passing Interface (MPI) [1] is a standard interface for high performance parallel computing. MPI can be used on both distributed memory multiprocessor and shared memory multiprocessor. It has been successfully used to develop many parallel applications on distributed memory system. Shared memory system is an important architecture for high performance computing. With the advent of multi-core computing [4], shared memory system will draw more and more attention. There are several programming models for shared memory system, such as OpenMP [2] and Pthread [3]. However, we still need to program on shared memory system with MPI. First, it is required to reuse the existing MPI code on distributed memory system for quickly developing applications on shared memory system. Second, applications developed using MPI are compatible across a wide range of architectures and need only a few modifications for performance tune-up. Third, the future computation will 


shift to multi-core computation. Consequently, exploiting the parallelism among these cores becomes especially important. We must figure out how to program on them to get high performance. MPI can easily do such work due to its great success in parallel computing today. It is difficult to develop an efficient MPI application on a shared memory system, since the MPI programming model takes little account of the underlying architecture. In the default communication protocol of MPI, the receiver copies data from the system buffer after the sender has copied the data to the system buffer. Usually a locking mechanism is used for synchronization on shared memory systems, which has a high overhead. Therefore, MPI programs suffer severe performance degradation on shared memory systems. In this paper, we propose techniques to eliminate these performance bottlenecks. We make use of shared memory for direct communication, bypassing buffer copying instead of performing a double data copy. Meanwhile, a simple busy-waiting polling method is used to reduce the expense of locking for synchronization. To evaluate these techniques, some experiments are carried out on basic MPI primitives. We make comparisons of the performance between the new implementation and the original implementation in MPICH [5]. The results indicate that the new implementation is able to achieve much higher performance than MPICH does. There are other works studying optimized implementations of communication in MPI programs on shared memory systems. A lock-free scheme is studied on the NEC SX-4 machine [6]. They use a layered communication protocol for portability. A shared memory communication device in MPICH is built for Sun's UltraEnterprise 6000 servers [7]. They use a static memory management similar to the one we describe in Section 2. TMPI [8] uses a thread-level approach for communication optimization. TMPI can achieve significant performance advantages in a multiprogrammed environment. We consider the case of only one process per processor, which is common in real computations. Our implementation is not restricted to a certain machine. We could achieve lower latency than the native MPI implementation. The rest of the paper is organized as follows. The next section introduces the default communication implementation of MPI and discusses its drawbacks on shared memory systems. Section 3 describes the design and implementation of the new approach used in this paper, which makes communication efficient on shared memory systems. Section 4 describes our evaluation methodology and presents the evaluation results. Section 5 concludes the paper and presents future work.

2

Motivation

In traditional implementation of MPI, a general shared memory device is needed, which can be used on many differently configured systems. There is a limitation that only a small memory space shared by all processes can be allocated on some systems. As a result, the communication between two processes is through a shared memory system buffer [9]. The system buffer can be accessed by all the processes in a parallel computer. We indicate this data transmission procedure in Fig. 1.


Fig. 1. Communication in the original MPI implementation; the message packet is transmitted by copying it into and out of the shared system buffer

A typical implementation of communication between two processes could be as follows. Suppose process A intends to send a message to process B. In the following, we use A to denote process A and B to denote process B. A free shared memory segment is pre-allocated, and a synchronization flag can be accessed by both A and B. The synchronization flag is used to indicate whether the shared memory segment is ready for A to write to or for B to read from. At the beginning, A checks the synchronization flag. If the shared memory segment is ready for writing, A copies the data to be transmitted from its user buffer into the shared memory segment. Then A sets the synchronization flag after finishing the data copying. This indicates that the data in the shared memory segment is ready to be copied to B's user buffer. A can either wait for an acknowledgement after copying or continue execution. If the shared memory segment is not ready for A, it chooses a back-off strategy and tries again later. Meanwhile, B can check for a forthcoming data packet by probing the synchronization flag. When it finds the flag set, B copies the data in the shared memory segment to its own user buffer. After the data has been transmitted completely, B clears the synchronization flag. At this time new communication can start. MPICH implements the above mechanism as an abstract device interface ch_shmem [7]. The communication strategy above is inefficient for a few reasons.

1. A double data copying protocol is required for each data exchange. This is unnecessary and increases the burden on the memory/cache system. It results in high communication latency.
2. A shared system buffer is required to which and from which the data is copied. This is a waste of memory capacity. On shared memory systems, local memory and cache capacity as well as aggregate bandwidth will limit the scalability of the MPI implementation.


3. The cost of the back-off strategy for synchronization is extremely large. The synchronization cost can adversely affect communication performance for short messages.
4. Furthermore, for collective communications (e.g. MPI_Bcast), the extra copy and complex synchronization aggravate these problems.

In a word, the critical problem for communication on a shared memory system is how to reduce the memory/cache capacity used and the waiting time for synchronization. In the next section, we will propose techniques to solve these problems.
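To make the double copying concrete, the following C++ sketch mimics the exchange described above with an atomic flag and a fixed-size shared segment; it is a simplified stand-in for MPICH's ch_shmem device, not its actual code, and all names and sizes are our own assumptions.

```cpp
#include <atomic>
#include <cstring>

// Simplified stand-in for the shared system buffer of Fig. 1 (not MPICH code).
// The segment would live in memory shared by both processes, e.g. obtained
// via shmget()/shmat() or mmap(MAP_SHARED). len must not exceed sizeof(data).
struct SharedSegment {
    std::atomic<int> full;     // synchronization flag: 0 = free, 1 = data ready
    std::size_t      len;
    char             data[64 * 1024];
};

// Sender: first copy, user buffer -> shared system buffer.
void sendMsg(SharedSegment* seg, const void* userBuf, std::size_t len) {
    while (seg->full.load(std::memory_order_acquire) != 0) { /* back off, retry */ }
    std::memcpy(seg->data, userBuf, len);
    seg->len = len;
    seg->full.store(1, std::memory_order_release);
}

// Receiver: second copy, shared system buffer -> user buffer.
std::size_t recvMsg(SharedSegment* seg, void* userBuf) {
    while (seg->full.load(std::memory_order_acquire) != 1) { /* poll */ }
    std::size_t len = seg->len;
    std::memcpy(userBuf, seg->data, len);
    seg->full.store(0, std::memory_order_release);   // segment free again
    return len;
}
```

In this scheme every payload byte crosses the memory system twice, once per memcpy(), which is exactly the overhead the next section removes.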

3

Design and Implementation

To solve the problems existing in MPI for shared memory systems, we design new communication protocols. We use primitives (IPC/shm) to create process-level shared memory segments for the messages to be transmitted. Communication among processes uses a single data copying protocol. Considering that there is usually only one process per processor, we choose a simple busy-waiting polling strategy for synchronization.

3.1 Optimized Communication Protocol

In Section 2, we described the default implementation of communication in MPI. In order to transmit data among processes, a double data copying protocol is required: one copy moves the data into shared memory, and the other moves the data out of the shared memory. However, the double data copying can be reduced to only one data copying when the sender process needs to retain an independent copy of the data. Even no data copying is required when the sender process need not retain a copy of the data, or when all the processes share the same data structure and operate on disjoint parts of it. We can allocate a shared memory segment for the data to be transmitted. Thus, every process can access this shared segment. Data in the shared memory segment can be copied to the receiver process's user buffer either in the shared memory or in its private space. If all the processes deal with disjoint parts of the same data structure, we need no data transmission, because after one process updates the data, all other processes can see the update immediately and they can use IPC/shm to directly read the data. This technique obtains lower latency and less memory consumption, because it needs only one or no data copying and no extra system buffer. Fig. 2 illustrates this technique. According to the above discussion, we can model the required time for sending a message packet of n bytes by the following formulas. We use T_original to indicate the communication time of the original version and T_optimized to indicate that of our optimized implementation. Original MPI device using double data copying:

$$ T_{original} = 2\,T_{datacpy} + T_{syn} \qquad (1) $$


Fig. 2. Optimized communication with only a single buffer copying by employing IPC

Optimized implementation using single data copying:

$$ T_{optimized} = T_{datacpy} + T_{alloc} + T_{free} + T_{bwsyn} \qquad (2) $$

Optimized implementation using no data copying:

$$ T_{optimized} = T_{alloc} + T_{free} + T_{bwsyn} \qquad (3) $$

Here T_datacpy represents the time for one data copying, T_syn represents the time for communication synchronization in the original version, T_alloc and T_free represent the time for shared memory allocation and deletion respectively, and T_bwsyn represents the time for communication synchronization using the busy-waiting strategy in our implementation. In the general case, the cost of shared memory segment allocation and freeing is very small compared with the cost of one data copying. So, T_original is larger than T_optimized. This indicates that we can achieve higher performance using the optimized communication protocol.

3.2 Busy-Waiting Polling Strategy

A synchronization mechanism is used to prevent the processes from interfering with each other during the communication procedure. It defines the time when the sender process makes the data ready and the time when the receiver process completes the data copying. Usually the mechanism is provided by locks and semaphores on shared memory systems. Unfortunately, typical implementations of locks tend to cause a system call, which is often very time-consuming. So, the cost of synchronization using a lock is too expensive to provide high performance. When the lock is not free, a back-off strategy is used. This may delay the starting time of communication: even when the lock becomes free, the communication may still need to wait for the process to become active again. With the assumption that there is usually only one application process per processor and processor resources are not needed for other tasks, we use a simple busy-waiting polling approach to replace the exponential back-off strategy. An application process repeatedly tests the synchronization flag as frequently as possible


to determine when it may access the shared memory. Once the synchronization flag is switched, the process detects the change immediately. Consequently, the data transmission can start without delay. This polling strategy reduces the time for synchronization on a shared memory system when there is only one process per processor.
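A minimal sketch of the combined single-copy and busy-waiting scheme, assuming a POSIX shm_open/mmap segment shared by the two processes; the segment name, fixed payload size, and function names are illustrative, and error handling is omitted.

```cpp
#include <atomic>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// The sender builds the message directly in the shared segment (its "user
// buffer" is the segment itself), so only the receiver's copy remains.
// Synchronization is a busy-waiting poll on a flag, assuming one process
// per processor. "/msg_seg" is an illustrative name.
struct Message {
    std::atomic<int> ready;    // 0 = being written, 1 = ready to read
    std::size_t      len;
    char             payload[64 * 1024];
};

Message* attachSegment(bool create) {
    int fd = shm_open("/msg_seg", create ? (O_CREAT | O_RDWR) : O_RDWR, 0600);
    if (create) ftruncate(fd, sizeof(Message));
    void* p = mmap(nullptr, sizeof(Message), PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);
    return static_cast<Message*>(p);
}

// Sender side: the application writes into sendBuffer(m) directly, then posts.
char* sendBuffer(Message* m) { return m->payload; }

void post(Message* m, std::size_t len) {
    m->len = len;
    m->ready.store(1, std::memory_order_release);
}

// Receiver: busy-waits on the flag, then performs the single data copy.
std::size_t take(Message* m, char* userBuf) {
    while (m->ready.load(std::memory_order_acquire) != 1) { /* spin */ }
    std::size_t len = m->len;
    std::memcpy(userBuf, m->payload, len);     // the only copy in the transfer
    m->ready.store(0, std::memory_order_release);
    return len;
}
```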

4

Performance Evaluation

We conduct experiments to study the performance of our implementation. Latency time is collected for multiple repetitions of each operation over various message sizes between 0 bytes and 1 megabyte. We use latency time here to denote the total transmission time of a message among processes. For a 0-byte message, the latency time is only the overhead of communication start-up and completion. All the results given are averaged over multiple runs.

4.1 Platform Configuration

The target system is a 16-way shared memory multiprocessor system running the SuSE Linux 9.0 platform. It has 16 x86 processors running at 3.0 GHz and 4 levels of cache, with each 32MB L4 cache shared amongst 4 CPUs. As for the interconnection, the system uses two 4x4 crossbars. We use the MPICH-1.2.5.2 library (with the ch_shmem device and gcc 3.3.3) from Argonne National Laboratory to generate the executables. Each process is bound to a processor to prevent the operating system from migrating processes among processors.

4.2 Performance Evaluation Results

Point-to-point communication transmits data between a pair of processes. We choose the standard MPI_Send/MPI_Recv for our test. Collective communication is often used in real applications. Usually it involves a large number of processes. This means that there is a large amount of data to be transmitted among the involved processes. So, its performance heavily depends on the implementation. We investigate MPI_Bcast, MPI_Gather, MPI_Scatter, MPI_Alltoall, and MPI_Reduce to evaluate collective communication with the new communication approach. We only present the results of MPI_Bcast due to space limitations. In MPI_Bcast a root process broadcasts a message to all the other processes. We present the experimental results in Fig. 3. We compare two implementations of Send/Recv in the left of Fig. 3: one is the default version in MPICH, which is labeled Original Send/Recv; the other is the optimized version, which is labeled New Send/Recv. In the right of Fig. 3, we compare two implementations of Bcast, which are labeled Original Bcast and New Bcast respectively. From Fig. 3, we can see that our new implementation outperforms the default implementation in MPICH. Our implementation has lower latency for various message sizes for both point-to-point communication and collective communication. Our implementation is almost six times faster than MPICH for short


Fig. 3. Left: Latency time of original MPICH Send/Receive (upper line) and optimized implementation (lower line). Right: Latency time of original MPICH Bcast (upper line) and optimized implementation (lower line).

messages and up to fifteen times faster for long messages. Our implementation gains great success for collective communication. The optimized implementation is about five times faster than MPICH for short messages. It is two orders of magnitude faster than MPICH when the message size is larger than 1 KB. For message sizes from 1 byte to 4 KB, the transmission time of broadcast shows almost no increase in the new implementation. That is because all other processes can directly copy data from the sender process's user buffer simultaneously, and the transmission time is determined by the cost of memcpy(), which is almost the same for various short data sizes. As the message size becomes large, memory bandwidth contention limits the communication performance, and the transmission time increases with the message size. When the message is so large that the system buffer pre-allocated for communication in the original MPI implementation cannot hold the whole message at one time, the message must be split into several small pieces, and one piece is transmitted at a time. We can see a step-like increase of the transmission time in MPICH in Fig. 3. However, all receivers can still directly copy from the sender's buffer in the optimized implementation, since the optimized implementation is not limited by message size.

5

Conclusion

In this paper, we present an optimized implementation of communication that significantly improves the performance of MPI communication on shared memory systems. The optimized methods include a single data copying protocol and a busy-waiting polling strategy for synchronization. Some experiments are conducted to evaluate the performance of a few basic MPI primitives with the optimized communication. The experimental results indicate that the primitives


with optimized communication achieve lower latency than the native version in MPICH. For future work, we intend to use the methods to develop more real-world applications on shared memory systems and to build a source-to-source translator that translates MPI programs written for distributed systems into an efficient shared memory version. We will also compare the overall performance of an application written using optimized message passing with that of an application written using a shared memory programming model such as OpenMP.

Acknowledgements This work is supported by the National Natural Science Foundation of China No.60533020. This work is partially finished at Intel China Research Center. We would like to thank the anonymous referees for their useful suggestions to improve the presentation of this paper.

References
1. The MPI Forum: The MPI Message-Passing Interface Standard. http://www.mpiforum.org/ (1995)
2. OpenMP Standards Board: OpenMP: A Proposed Industry Standard API for Shared Memory Programming. http://www.openmp.org/openmp/mpdocuments/paper/paper.html (1997)
3. Pthread interface. ANSI/IEEE Standard 1003.1 (1996)
4. M. Greeger: Multicore CPUs for the Masses. ACM Queue, 3(7) (2005) 63-64
5. W. Gropp, E. Lusk, A. Skjellum, N. Doss: MPICH: A high-performance, portable implementation for the MPI message-passing interface. Parallel Computing, 22 (1996) pp. 789-928
6. W. Gropp, E. Lusk: A high-performance MPI implementation on a shared-memory vector supercomputer. Parallel Computing, 22 (1997) pp. 1513-1526
7. B. V. Protopopov, A. Skjellum: Shared-memory communication approaches for an MPI message-passing library. Concurrency: Practice and Experience, 12(9) (2000) pp. 799-820
8. H. Tang, K. Shen, T. Yang: Compile/Run-time Support for Threaded MPI Execution on Multiprogrammed Shared Memory Machines. In Proceedings of ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (1999) pp. 107-118
9. D. Buntinas, M. Guillaume, W. Gropp: Data Transfers between Processes in an SMP System: Performance Study and Application to MPI. International Conference on Parallel Processing (ICPP), Columbus, Ohio, USA (2006) pp. 487-496

Adaptive Sparse Grid Classification Using Grid Environments

Dirk Pflüger, Ioan Lucian Muntean, and Hans-Joachim Bungartz

Technische Universität München, Department of Informatics, Boltzmannstr. 3, 85748 Garching, Germany
{pflueged, muntean, bungartz}@in.tum.de

Abstract. Common techniques tackling the task of classification in data mining employ ansatz functions associated with training data points to fit the data as well as possible. Instead, the feature space can be discretized and ansatz functions centered on grid points can be used. This allows for classification algorithms scaling only linearly in the number of training data points, making it possible to learn from data sets with millions of data points. As the curse of dimensionality prohibits the use of standard grids, sparse grids have to be used. Adaptive sparse grids allow for a trade-off between both worlds by refining in rough regions of the target function rather than in smooth ones. We present new results for some typical classification tasks and show first observations of dimension adaptivity. As the study of the critical parameters during development involves many computations for different parameter values, we used a grid environment, which we present.

Keywords: data mining, classification, adaptive sparse grids, grid environment.

1

Introduction

Today, an ever increasing amount of data is available in various fields such as medicine, e-commerce, or geology. Classification is a common task making use of previously known data to make predictions for new, yet unknown data. Efficient algorithms that can process vast datasets are sought for. The basics of sparse grid classification have already been described in [1,2], for example. Therefore, in this section, we summarize the main ideas only very briefly and refer to the references cited above for further information. We focus on binary classification. Given is a preclassified set of M data points for training, S = {(x_i, y_i) ∈ [0,1]^d × {−1,1}}_{i=1}^{M}, normalized to the d-dimensional unit hypercube. The aim is to compute a classifier f : [0,1]^d → {−1,1} to obtain a prediction of the class −1 or +1 for previously unseen data points. To compute f, we follow the Regularization Network approach and minimize the functional

$$ H[f] = \frac{1}{M} \sum_{i=1}^{M} \left( y_i - f(x_i) \right)^2 + \lambda \,\|\nabla f\|_{L_2}^2 , $$


with the cost function (y_i − f(x_i))^2 ensuring good approximation of the training data by f and the regularization operator \|\nabla f\|_{L_2}^2 guaranteeing that f is somehow smooth, which is necessary as the classifier should generalize from S. The regularization parameter λ steers the trade-off between accuracy and smoothness. Rather than common algorithms, which employ mostly global ansatz functions associated with data points and typically scale quadratically or worse in M, we follow a somewhat data-independent approach and discretize the feature space to obtain a classification algorithm that scales linearly in M: we restrict the problem to a finite dimensional space V_N spanned by N basis functions φ_j to obtain our classifier f_N(x) = \sum_{j=1}^{N} \alpha_j \varphi_j(x), in our case the space of d-linear functions. Minimization of H[f] leads to a linear system with N unknowns,

$$ \left( \lambda M C + B \cdot B^T \right) \alpha = B y , \qquad (1) $$

with C_{ij} = (\nabla\varphi_i(x), \nabla\varphi_j(x))_{L_2} and B_{ij} = \varphi_i(x_j). To counter the curse of dimensionality and to avoid N^d unknowns in d dimensions, we use sparse grids, described for example in [3]. Regular sparse grids V_n^{(1)} up to level n in each direction are based on a hierarchical formulation of basis functions and an a priori selection of grid points – needing only O(N (log N)^{d−1}) grid points with just slightly deteriorated accuracy. Sparse grids have been used for classification via the combination technique [1], where the sparse grid solution is approximated by a combination of solutions for multiple, but smaller and regular grids. Sparse grids have the nice property that they are inherently adaptive, which is what we will make use of in the following sections. Using sparse grids, the system of linear equations can be solved iteratively, with each iteration scaling only linearly in the number of training data points and grid points, respectively. The underlying so-called UpDown algorithm, which was shown in [2], is based on traversals of the tree of basis functions.
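For a small, dense toy setting, system (1) can be assembled explicitly and solved with a plain conjugate gradient iteration, as sketched below; this only illustrates the structure of λMC + B·B^T and is not the matrix-free UpDown algorithm used in the actual implementation.

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;   // dense, row-major; toy sizes only

Vec matVec(const Mat& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Solve (lambda*M*C + B*B^T) alpha = B*y with unpreconditioned CG.
// C is N x N, B is N x M with B[i][k] = phi_i(x_k), y has length M.
Vec solveSystem(const Mat& C, const Mat& B, const Vec& y,
                double lambda, int M, int maxIter = 200, double tol = 1e-8) {
    const std::size_t N = C.size();
    Mat A(N, Vec(N, 0.0));                  // assemble A explicitly (toy only)
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j) {
            double bbt = 0.0;
            for (int k = 0; k < M; ++k) bbt += B[i][k] * B[j][k];
            A[i][j] = lambda * M * C[i][j] + bbt;
        }
    Vec b(N, 0.0);                          // right-hand side B*y
    for (std::size_t i = 0; i < N; ++i)
        for (int k = 0; k < M; ++k) b[i] += B[i][k] * y[k];

    Vec alpha(N, 0.0), r = b, p = r;
    double rr = dot(r, r);
    for (int it = 0; it < maxIter && std::sqrt(rr) > tol; ++it) {
        Vec Ap = matVec(A, p);
        double step = rr / dot(p, Ap);
        for (std::size_t i = 0; i < N; ++i) { alpha[i] += step * p[i]; r[i] -= step * Ap[i]; }
        double rrNew = dot(r, r);
        for (std::size_t i = 0; i < N; ++i) p[i] = r[i] + (rrNew / rr) * p[i];
        rr = rrNew;
    }
    return alpha;
}
```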

2

Grid-Based Development

For classification using regular sparse grids there are two important parameters determining the accuracy of the classifier to be learned. First, we have n, the maximum level of the grid. For low values of n there are usually not enough degrees of freedom for the classifier to be able to learn the underlying structure, whereas large n lead to overfitting, as each noisy point in the training data set can be learnt. Second and closely related, there is the regularization parameter λ, steering the trade-off between smoothness and approximation accuracy. Given enough basis functions, λ determines the degree of generalization. To find good values for these parameters, heuristics or experience from other problems can be used. For a fixed n the accuracy is maximized for a certain value of λ and decreases for higher or lower values, oscillating only little, for example. Especially during the development stage, heuristics are not sufficient; an optimal combination of both parameters for a certain parameter resolution is of interest. To make things even harder, systems under development are usually


not optimized for efficiency, but rather designed to be flexible so that different approaches can be tested. Such parameter studies typically demand a significant computational effort, are not very time critical, but should be performed within a reasonable amount of time. This requires that sufficient computational power and/or storage space is available to the developer at the moment when needed. Grid environments grant users access to such resources. Additionally, they provide easy access and therefore simplify the interaction of the users – in our setting the algorithm developers – with various resources [4]. In grid middleware, for example the Globus Toolkit [5], this is achieved through mechanisms such as single sign-on, delegation, or extensive support for job processing. Parameter studies are currently among the most widespread applications of grid computing. For the current work, we used the grid research framework GridSFEA (Grid-based Simulation Framework for Engineering Applications) [6]. This research environment aims at supporting various engineering simulation tasks, such as fluid dynamics, in a grid environment. GridSFEA uses the Globus Toolkit V4. It comprises both server-side components (grid tools and services) and client-side tools (such as an OGCE2 grid portal, using the portlet technology as a way to customize the access to the grid environment). The grid portal was extended by a portlet adapted to the requirements of sparse grid classification. Thus, support was introduced for sampling the parameter space, for the automatic generation and submission of corresponding jobs, and for collecting and processing results. Figure 1 shows a snapshot of our current portlet, running a sparse grid classification for various values of λ.

Fig. 1. Performing sparse grid parameter studies using a portlet of the GridSFEA portal

The usage of this grid portal enables the algorithm developer to focus on the design of the algorithm and the interpretation of the results, and to leave the management of the computations to the framework. Furthermore, the portal of GridSFEA allows the user to choose the grid environment to be used. This way, the computations – here parameter studies – can be performed at different computing sites, such as supercomputing centres.
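A parameter study of the kind handled by the portlet boils down to enumerating (n, λ) combinations and turning each one into a job. The sketch below is purely illustrative: the executable name, the command-line flags, and the submission mechanism are assumptions, since in practice the GridSFEA portal generates and submits the corresponding grid jobs for the user.

```python
import itertools
import subprocess

CLASSIFIER = "./sg_classify"   # hypothetical classifier executable

def sample_lambda(i_max=6):
    """lambda = 10^-i, i = 0..i_max, as in the coarse search described later."""
    return [10.0 ** (-i) for i in range(i_max + 1)]

def generate_jobs(levels, lambdas, train="train.arff", test="test.arff"):
    for n, lam in itertools.product(levels, lambdas):
        yield [CLASSIFIER, "--level", str(n), "--lambda", str(lam),
               "--train", train, "--test", test]

for cmd in generate_jobs(levels=[1, 2, 3, 4], lambdas=sample_lambda()):
    # in the grid setting each command becomes one job description submitted
    # via the portal; locally one could simply run it, e.g. subprocess.run(cmd)
    print(" ".join(cmd))
```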

3 Adaptivity and Results

The need for a classification algorithm that scales only linearly in the number of training data points forces us in the first place to trade the possibility of exploiting the structure of the training data for a somewhat data-independent discretization of the feature space. Therefore we have to deal with a higher number of basis functions than common classification algorithms, which try to fit the data with as few basis functions as possible. Even though one iteration of a sparse grid solver scales only linearly in the number of unknowns, this still imposes restrictions on the maximum depth of the underlying grid. The idea of adaptive sparse grid classification is to obtain a trade-off between common, data-dependent algorithms and the data independence of sparse grid classification. The aim is to reduce the order of magnitude of the number of unknowns while keeping the linear training complexity: grid points should be invested especially where they contribute most to the classification boundary, as the zero-crossing of the classifier determines the class prediction on new data. For the classification task, the exact magnitude of the approximation error is not important in regions that are purely positive or negative; it is reasonable to spend more effort in rough regions of the target function than in smooth ones.

As we already showed in [2], special consideration has to be given to the treatment of the boundary of the feature space. Normally, in sparse grid applications, grid points have to be invested on the boundary to allow for non-zero function values there. Employing adaptivity in classification allows us to neglect those grid points as long as it is guaranteed that no data points are located exactly on the boundary. This corresponds to the assumption that the data belonging to a certain class is somehow clustered together – and can therefore be separated from other data – which means that regions towards the border of the feature space usually belong to the same class. In regions where the classification boundary is close to the border of the feature space, adaptivity takes care of this by refining the grid where necessary and by creating basis functions that take the place of common boundary functions. Thus we normalize our data to fit in a domain which is slightly smaller than the d-dimensional unit hypercube, the domain of $f_N$. This way we can start the adaptive classification with the d-dimensional sparse grid of level 2, $V_2^{(1)}$, without the boundary and therefore with only 2d + 1 grid points, rather than using grid points on the boundary, which would result in $\sum_{j=0}^{d} \binom{d}{j} 2^{d-j}(1+2j) = 3^d + 2d \cdot 3^{d-1} \in O(d \cdot 3^d)$ unknowns.

For the remaining part of this section we proceed as follows: starting with $V_2^{(1)}$, we use a preconditioned conjugate gradient method for a few iterations to train our classifier. Out of all refinement candidates we choose the grid point with the highest surplus, add all its children to the current grid, and repeat this until satisfied. One has to take care not to violate one of the basic properties of sparse grids: for each basis function, all parents in the hierarchical tree of basis functions have to exist. All missing ancestors have to be created, which can be done recursively.
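The refinement loop just described can be illustrated with a minimal one-dimensional sketch: hierarchical hat functions indexed by (level, index), the not-yet-refined point with the largest absolute surplus is picked, and its two children are created. Here the surplus is the plain interpolation surplus of a steep target function rather than a coefficient of the classification system, so the snippet only mimics the mechanics of the adaptivity; the (level, index) bookkeeping is a standard convention assumed for illustration.

```python
import numpy as np

def point(l, i):
    """Grid point of the hierarchical basis function (l, i), i odd, in (0, 1)."""
    return i * 2.0 ** (-l)

def surplus(f, l, i):
    """Hierarchical surplus: function value minus the interpolant of the two parents."""
    x, h = point(l, i), 2.0 ** (-l)
    return f(x) - 0.5 * (f(x - h) + f(x + h))

def refine(f, steps=8):
    grid = {(1, 1), (2, 1), (2, 3)}     # 1-D analogue of the level-2 starting grid
    refined = set()
    for _ in range(steps):
        # pick the not-yet-refined point with the largest absolute surplus
        l, i = max(grid - refined, key=lambda p: abs(surplus(f, *p)))
        refined.add((l, i))
        # add both hierarchical children; in d dimensions all missing ancestors
        # would have to be created recursively as well
        grid.update({(l + 1, 2 * i - 1), (l + 1, 2 * i + 1)})
    return sorted(grid)

steep = lambda x: np.tanh(40.0 * (x - 0.3))   # toy target, "rough" near x = 0.3
for l, i in refine(steep):
    print(l, i, point(l, i))                  # refined points cluster around x = 0.3
```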

3.1 Classification Examples

The first example, taken from Ripley [7], is a two-dimensional artificial dataset which was constructed to contain 8% of noise. It consists of 250 data points for training and 1000 points to test on. Being neither linearly separable nor very complicated, it shows typical characteristics of real-world data sets. Looking for a good value of λ, we evaluated our classifier after six refinement steps for λ = 10^{-i}, i = 0...6, took the values with the best and the two neighbouring accuracies, namely 0.01, 0.001 and 0.0001, and searched once more for a better value of λ in between those by evaluating for λ = 0.01 − 0.001·i, i = 1...9, and λ = 0.001 − 0.0001·i, i = 1...9. The best λ we obtained this way was 0.004. Figure 2 shows the training data (left) as well as the classification for λ = 0.004 and the underlying adaptive sparse grid after only 7 refinement steps (right). Even though there are only a few steps of refinement, it can clearly be observed that more grid points are spent in the region with the most noise and that regions with training data belonging to the same class are neglected. Note that one refinement step in the x_1-direction (along x_2 = 0.5) causes the creation of two child nodes in the x_2-direction (at x_2 = 0.25 and x_2 = 0.75).

Fig. 2. Ripley dataset: training data and classification for λ = 0.004

Table 1 shows some results comparing different sparse grid techniques. For the combination technique and for sparse grids with and without grid points on the boundary, we calculated the accuracy for levels one to four and λ = 10^{-j} − i·10^{-j-1}, i = 0...8, j = 0...5. Increasing the level further does not improve the results, as this quickly leads to overfitting. We show the number of unknowns for each grid, the best value of λ, and the best accuracy achieved on the test data. A general property of sparse grid classification can be observed: the higher the number of unknowns, the more important the smoothness functional gets, and therefore the value of the regularization parameter λ increases. Using a coarser grid induces a higher degree of smoothness, which is a well-known phenomenon.

Table 1. Ripley dataset: accuracies [%] obtained for the combination technique, regular sparse grids with and without boundary functions, and the adaptive technique

  comb. techn.                   sg boundary                sg                        adapt. sg (λ = 0.004)
  n  |grids|  λ        acc.     |grid|  λ        acc.      |grid|  λ        acc.      |grid|  acc.
  1      9   6·10^-5   90.3        9   6·10^-5   90.3         1*   –        50.0         5    89.9
  2     39   0.0005    90.8       21   0.0004    90.7         5    0.0005   90.3         9    90.2
  3    109   0.005     91.1       49   0.006     91.2        17    0.005    91.2        13    89.8
  4    271   0.007     91.1      113   0.005     91.2        49    0.007    91.2        19    91.1
                                                                                        21    91.2
                                                                                        24    91.2
                                                                                        28    91.3
                                                                                        32    91.4
                                                                                        35    91.5

Another observation is that the assumption that grid points on the boundary can be neglected holds here even for regular grids. Of course, the sparse grid on level 1 can do nothing but guess one class value for all data points, and five unknowns are not enough to achieve an accuracy of 90.7%, but already for level 3 the same accuracy as for the grid with boundary values is reached. For the adaptive sparse grid classification we show the results only for λ = 0.004 and eight refinement steps. With fewer grid points than the sparse grid with boundary on level 3, we achieved an excellent accuracy of 91.5% – 0.3% higher than what we were able to obtain using regular sparse grids. This is due to the fact that increasing a full level results in a better classification boundary in some parts, but leads to overfitting in other parts at the same time; here, adaptivity helps to overcome this problem. We used our grid environment to compute and gather the results of these 180 runs for each level, even though the problem sizes are quite small: our implementation of the combination technique was written only to compare it with the direct sparse grid technique, for example, and is therefore far from efficient. We neither use an efficient solver, such as a multigrid technique, nor care about the amount of memory we consume; we are just interested in eventually getting results – a problem well suited for grid environments. As the portlet mentioned in Sec. 2 allows us to easily specify a set of parameters, the adaptation for use in GridSFEA was just a matter of minutes.

A second, 6-dimensional dataset we used is a real-world dataset taken from [8] which contains medical data of 345 patients, mainly blood test values. The class value indicates whether a liver disease occurred or not. Because of the limited amount of data available we did a stratified 10-fold cross-validation. To find a good value for λ we calculated the accuracy for different values of the regularization parameter as we did above for the regular sparse grids, this time computing the accuracy of the prediction on the 10-fold test data. Again we used GridSFEA to compute and gather the results. Theoretically one could even submit a job for each fold of the 10-fold cross-validation and gather the different accuracies
afterwards, but as the coefficients computed for the first fold can be used as a very good starting vector for the PCG in the other nine folds, this was not pursued. Table 2 compares the results of the adaptive technique (zero boundary conditions) with the best results taken from [9], the combination technique (anisotropic grid, linear basis functions), and from [10], both linear and non-linear support vector machines (SVM). The adaptive sparse grid technique was able to outperform the other techniques: for a relatively large value of λ and after only 9 refinement steps we obtained a 10-fold testing accuracy more than 2.3% higher than our reference results – with only 1302 unknowns. Again it proved useful to refine in the most critical regions while neglecting parts of the domain where the target function is smooth.

Table 2. Accuracies [%] (10-fold testing) for the Bupa liver dataset

  adapt. sg (λ = 0.3)
  # refinements   |grid|    acc.
        7            485   70.71
        8            863   75.64
        9           1302   76.25

  comb. techn. (lin. anisotrop.)   acc. 73.9
  SVM (linear)                     acc. 70.0
  SVM (non-linear)                 acc. 73.7
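The warm-start idea mentioned above – reusing the coefficients of one fold as the starting vector of the PCG in the next – can be sketched as follows. The SPD matrices below are random stand-ins for the systems of Eq. (1) (an assumption purely for self-containment), and the printed iteration counts merely illustrate the effect.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n, folds = 200, 10
systems = []
for _ in range(folds):
    R = 0.05 * rng.standard_normal((n, n))
    systems.append((np.eye(n) + R @ R.T,             # SPD stand-in for lam*M*C + B B^T
                    1.0 + 0.1 * rng.standard_normal(n)))

alpha = np.zeros(n)                                  # fold 0 starts from zero
for k, (A, b) in enumerate(systems):
    iters = [0]
    alpha, info = cg(A, b, x0=alpha,                 # warm start from the previous fold
                     callback=lambda _: iters.__setitem__(0, iters[0] + 1))
    print(f"fold {k}: {iters[0]} CG iterations")     # later folds typically need fewer
```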

3.2 Dimension Adaptivity

Similar observations can be made for a third dataset, the Pima diabetes dataset, again taken from [8]. 769 women of Pima Indian heritage living near Phoenix were tested for diabetes mellitus; 8 features describe the results of different examinations, such as blood pressure measurements. For brevity we focus only on some observations concerning the dimension adaptivity of the adaptive sparse grid technique. As expected, the adaptive refinement neglects dimensions containing no information but only noise quite well. Extending the diabetes dataset, for example, by one additional feature with all values set to 0.5 leads to only two more grid points for $V_2^{(1)}$. As the two additional surpluses are close to zero, the effects on $BB^T$ and $By$ are only minor, see (1). But extending the dimensionality modifies the smoothness functional: there are stronger impacts on C, and therefore the trade-off between smoothness and approximation error changes. One could expect that the same effects could be produced by changing the value of λ suitably. And in fact, for training on the first 567 instances and testing on the remaining 192 data points, almost identical accuracies (about 74.5%) can be achieved when changing λ to 0.0002 compared to extending the dimensionality by one for λ = 0.001 – at least for the first few refinements. Further improvement could be achieved by omitting the "weakest" dimension, the one with the lowest surpluses, during refinement. When some attributes are expected to be less important than others, this could lead to further savings in the number of unknowns needed.

4 Summary

We showed that adaptive sparse grid classification is not only possible, but useful. It combines the best of both worlds: the data-independent, linear-runtime setting and the data-dependent, non-linear one that reduces the number of unknowns. If increasing the level of a regular sparse grid leads to overfitting in some regions but improves the quality of the classifier in others, adaptivity can take care of this by refining just in the rough regions of the target function. In comparison to regular sparse grids, the adaptive ones need far fewer unknowns, which can reduce the computational costs significantly. Concerning dimension adaptivity we pointed out first observations; further research has to be invested here. Finally, we demonstrated the use of GridSFEA, our grid framework, for parameter studies, especially during the algorithm development stage. Theoretically, even high-dimensional classification problems could be tackled by adaptive sparse grid classification, as we showed that we can start with a number of grid points which is linear in the dimensionality. In practice, however, our current smoothness functional restricts the dimensionality, as it introduces a factor of 2^d in the number of operations. Therefore we intend to investigate the use of alternative smoothness functionals in the near future to avoid this.

References

1. Garcke, J., Griebel, M., Thess, M.: Data mining with sparse grids. Computing 67(3) (2001) 225–253
2. Bungartz, H.J., Pflüger, D., Zimmer, S.: Adaptive Sparse Grid Techniques for Data Mining. In: Modelling, Simulation and Optimization of Complex Processes, Proc. Int. Conf. HPSC, Hanoi, Vietnam, Springer (2007) to appear
3. Bungartz, H.J., Griebel, M.: Sparse grids. Acta Numerica 13 (2004) 147–269
4. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers (2005)
5. Foster, I.: Globus toolkit version 4: Software for service-oriented systems. In: IFIP International Conference on Network and Parallel Computing. Volume 3779 of LNCS, Springer-Verlag (2005) 2–13
6. Muntean, I.L., Mundani, R.P.: GridSFEA – Grid-based Simulation Framework for Engineering Applications. http://www5.in.tum.de/forschung/grid/gridsfea (2007)
7. Ripley, B.D., Hjort, N.L.: Pattern Recognition and Neural Networks. Cambridge University Press, New York, NY, USA (1995)
8. Newman, D., Hettich, S., Blake, C., Merz, C.: UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html (1998)
9. Garcke, J., Griebel, M.: Classification with sparse grids using simplicial basis functions. Intelligent Data Analysis 6(6) (2002) 483–502
10. Fung, G., Mangasarian, O.L.: Proximal support vector machine classifiers. In: KDD '01: Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, New York, NY, USA, ACM Press (2001) 77–86

Latency-Optimized Parallelization of the FMM Near-Field Computations

Ivo Kabadshow1 and Bruno Lang2

1 John von Neumann Institute for Computing, Central Institute for Applied Mathematics, Research Centre Jülich, Germany
[email protected]
2 Applied Computer Science and Scientific Computing Group, Department of Mathematics, University of Wuppertal, Germany

Abstract. In this paper we present a new parallelization scheme for the FMM near-field. The parallelization is based on the Global Arrays Toolkit and uses one-sided communication with overlapping. It employs a purely static load-balancing approach to minimize the number of communication steps and benefits from a maximum utilization of data locality. In contrast to other implementations the communication is initiated by the process owning the data via a put call, not the process receiving the data (via a get call).

1 Introduction

The simulation of particle systems is a central problem in computational physics. If the interaction between these particles is described using an electrostatic or gravitational potential ∼ 1/r, the accurate solution poses several problems. A straightforward computation of all pairwise interactions has complexity O(N^2). The Fast Multipole Method (FMM) developed by Greengard and Rokhlin [1] reduces the complexity to O(N). A detailed depiction of the FMM would be beyond the scope of this paper and can be found elsewhere [2]. We will only outline the details most important for the parallelization. The FMM proceeds in five passes.

– Sort all particles into boxes.
– Pass 1: Calculation and shifting of multipole moments.
– Pass 2: Transforming multipole moments.
– Pass 3: Shifting Taylor-like coefficients.
– Pass 4: Calculation of the far-field energy.
– Pass 5: Calculation of the near-field energy.

The most time-consuming parts of the algorithm are Pass 2 and Pass 5, each contributing approximately 45% to the overall computing time. Since we have different calculation schemes for Pass 2 and Pass 5, different parallelization schemes are necessary. This paper deals with Pass 5 only, the direct (near-field) pairwise interaction. The sequential version of the near-field computation involves the following steps.

– Create the particle–box relation ibox in skip-vector form (see Sec. 3.1).
– Calculate all interactions of particles contained in the same box i (routine pass5inbox).
– Calculate all interactions of particles contained in different boxes i and j (routine pass5bibj).

2 Constraints for the Parallelization

In order to match the parallelization of the remaining FMM passes and to achieve reasonable efficiency, the following constraints had to be met.

1. Use the Global Arrays Toolkit (GA) for storing global data to preserve the global view of the data. GA provides a parallel programming model with a global view of data structures. GA uses the ARMCI communication library, which offers one-sided and non-blocking communication. A detailed description of all GA features can be found in Ref. [4]. ARMCI is available on a wide range of architectures. Additionally, GA comes with its own memory allocator (MA). To use the available memory efficiently, our implementation allocates all large local scratch arrays dynamically via MA routines instead of direct Fortran allocation.
2. Minimize the number of communication steps. All tests were performed on the Jülich Multi Processor (JUMP), an SMP cluster with 41 nodes, each with 32 IBM Power 4+ processors and 128 GB RAM. The nodes are connected through a high-performance switch with a peak bandwidth of 1.4 GB/s and measured average latencies of ≈ 30 μs. Shared memory can be accessed inside a node at a cost of ≈ 3 μs. Ignoring the latency issue on such machines can have a dramatic impact on the efficiency; see Fig. 1.
3. To reduce the communication costs further, try to overlap communication and calculation and use one-sided communication to receive (ga_get) and send (ga_put) data.

3 Implementation

In this section we describe some details of the parallel implementation. Related work and other implementation schemes can be found elsewhere [7,8,9].

3.1 Initial Data Distribution

The most memory consuming data arrays are the Cartesian coordinates (xyz), the charges (q), the particle–box relation stored in the ibox vector, and two output arrays storing the data for the potential (fmmpot) and the gradient (fmmgrad). These arrays have to be distributed to prevent redundancy. The ibox vector is a special auxiliary data structure mapping all charges to the corresponding boxes. To enable fast access in a box-wise search this structure is stored in a skip-vector form (see Fig.2).

Fig. 1. This figure highlights the latency bottleneck. The speedup for a system with 8^7 ≈ 2·10^6 homogeneously distributed particles is shown. Communication is implemented using blocking communication; each box is fetched separately. The latency switching from 3 μs to 30 μs can be clearly seen at 32 processors.

Fig. 2. The diagram illustrates the box management. Empty boxes (5,8,10,11,12) are not stored. The ibox vector associates the box number with each particle. To allow fast access to neighboring boxes the ibox vector is modified into a skipped form. Only the first particle in a box holds the box number, all subsequent charges link forward to the first particle of the next box. Finally, the last particle in a box links back to the first particle of the same box.

3.2 Design Steps

The parallel algorithm can be divided into five steps:

– Align boxes on processors.
– Prepare and initiate communication.
– Compute local contributions.
– Compute remote contributions.
– Further communication and computation steps, if small communication buffers make them necessary.

Step 1 - Box Alignment. In Passes 1 to 4 of the FMM scheme the particles of some boxes may be stored on more than one processor. These boxes have to be aligned to a single processor. While this guarantees that the Pass 5 subroutine pass5inbox can operate locally, it may introduce load imbalance. However, assuming a homogeneous particle distribution, the load imbalance will be very small, since the workload for a single box is very small. The alignment can be done as follows:

– Compute size and indices of the "leftmost" and "rightmost" boxes (w.r.t. Morton ordering; cf. Fig. 3) and store this information in a global array of size O(nprocs).
– Gather this array into a local array boxinfo containing information from each processor.
– Assign every box such that the processor owning the larger part gets the whole box.
– Reshape the irregular global arrays ibox, xyz, q, fmmgrad, fmmpot.
– Update boxinfo.

After alignment the pass5inbox subroutine can be called locally.

Fig. 3. The data is stored along a Morton-ordered three-dimensional space filling curve (SFC)[6]. The diagram shows a 2D Morton-ordered SFC for the sequential version and a two and four processor parallel version. Since the data is distributed homogeneously the particle/box split is exactly as shown here.
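A Morton key is obtained by interleaving the bits of the three integer box coordinates. The following minimal sketch (not taken from the paper's Fortran code) shows the encoding and how the sorted box list would be cut into contiguous chunks, one per processor, as in Fig. 3:

```python
def morton3d(ix, iy, iz, bits=21):
    """Interleave the bits of three non-negative box indices into a Morton key:
    bit k of ix, iy, iz goes to bit 3k, 3k+1, 3k+2 of the key."""
    key = 0
    for k in range(bits):
        key |= ((ix >> k) & 1) << 3 * k
        key |= ((iy >> k) & 1) << 3 * k + 1
        key |= ((iz >> k) & 1) << 3 * k + 2
    return key

# sort the boxes along the Z-order curve and cut the list into contiguous,
# (almost) equally sized chunks, one per processor
boxes = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
boxes.sort(key=lambda b: morton3d(*b))
nprocs = 4
chunks = [boxes[p * len(boxes) // nprocs:(p + 1) * len(boxes) // nprocs]
          for p in range(nprocs)]
```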

Step 2 - Prepare and Initiate Communication. As in the sequential program, each processor determines the neighbors of the boxes it holds, i.e. the boxes which lie geometrically above, to the right of, or in front of the respective box. All other boxes do not need to be considered, since these interactions were already taken into account earlier. This is a special feature of the Morton-ordered
data structure. Boxes not available locally will be sent by the owner of the box to ensure a minimal number of communication steps. The remote boxes are stored in a local buffer. If the buffer is too small to hold all neighboring boxes, the interactions concerned will be postponed until the buffer can be reused. In order to achieve the O(nprocs) latency, foreign boxes are put by the processor that holds them into the local memory of the processor requiring them. Each processor has to perform the following steps:

– Determine local boxes that are needed by another processor.
– Determine the processors they reside on.
– Compute the total number of 'items' that every processor will receive.
– Check local buffer space against total size.
– If possible, create a sufficiently large send buffer.
– Issue a ga_put() command to initiate the data transfer.
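The paper uses the Global Arrays calls named above (ga_put, ga_sync). As a rough, library-agnostic analogy – not the GA code itself – the same owner-initiated pattern can be sketched with MPI one-sided communication via mpi4py; the buffer layout (one fixed slot per source rank) and all sizes are assumptions for illustration only.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

SLOT = 1024                                        # assumed buffer space per source rank
recv = np.zeros(nprocs * SLOT)                     # local buffer for remote boxes
win = MPI.Win.Create(recv, disp_unit=recv.itemsize, comm=comm)

# data this rank owns but other ranks need (stand-in for the boxes to be sent)
outgoing = {dest: np.full(8, float(rank)) for dest in range(nprocs) if dest != rank}

win.Fence()                                        # open the exposure epoch
for dest, data in outgoing.items():
    win.Put(data, dest, target=rank * SLOT)        # the owner pushes, like ga_put
win.Fence()                                        # plays the role of ga_sync
# ... now compute the interactions with the remote boxes stored in 'recv' ...
win.Free()
```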

Step 3 - Compute Local Contributions. Now that the communication is started, the local portion of the data can be used for computation. All in-box and box–box interactions that are available locally are computed. All computations involving remote/buffered boxes are deferred to Steps 4/5. Step 3 comprises the calculation of the total Coulomb energy E_c, the Coulomb forces F_c(r_k), and the Coulomb potential φ_c(r_k). Let n_i denote the number of particles in box i, let (i, k) be the kth particle in box i, and let q_{i,k} and r_{i,k} be its charge and position, respectively. Then the total Coulomb energy between all particles in boxes i and j, the total Coulomb force of all particles in box j on a certain particle (i, k) in box i, and the corresponding Coulomb potential are given by

$$E_c = \frac{1}{2} \sum_{k=1}^{n_i} \sum_{l=1}^{n_j} \frac{q_{i,k}\, q_{j,l}}{|r_{i,k} - r_{j,l}|}, \qquad (1)$$

$$F_c(r_{i,k}) = q_{i,k} \sum_{l=1}^{n_j} \frac{q_{j,l}\,(r_{i,k} - r_{j,l})}{|r_{i,k} - r_{j,l}|^3}, \qquad (2)$$

$$\phi_c(r_{i,k}) = \sum_{l=1}^{n_j} \frac{q_{j,l}}{|r_{i,k} - r_{j,l}|}, \qquad (3)$$

respectively. Note that for the in-box case, i = j, the terms corresponding to the interaction of a particle with itself must be dropped. Step 4/5 - Compute Remote Contributions. Since the communication request to fill the buffer was issued before the start of step 3, all communication should be finished by now. After a global synchronization call ga_sync, the buffer is finally used to calculate interactions with the remote/buffered boxes.
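The box–box kernel of Eqs. (1)–(3) can be written compactly with NumPy. The following sketch only illustrates the formulas; it is not the pass5inbox/pass5bibj Fortran routines, and prefactors and units are omitted.

```python
import numpy as np

def box_box(q_i, r_i, q_j, r_j, same_box=False):
    """Direct evaluation of Eqs. (1)-(3) for the particles of boxes i and j:
    total Coulomb energy, forces on the particles of box i, and their potentials."""
    d = r_i[:, None, :] - r_j[None, :, :]          # r_{i,k} - r_{j,l}, shape (n_i, n_j, 3)
    dist = np.linalg.norm(d, axis=-1)
    if same_box:                                   # i = j: drop self-interaction terms
        np.fill_diagonal(dist, np.inf)
    inv = 1.0 / dist
    pot = (q_j[None, :] * inv).sum(axis=1)                                        # Eq. (3)
    force = q_i[:, None] * ((q_j[None, :] * inv**3)[..., None] * d).sum(axis=1)   # Eq. (2)
    energy = 0.5 * (q_i * pot).sum()                                              # Eq. (1)
    return energy, force, pot

# pass5bibj-style usage for two small boxes with 8 particles each (toy data)
rng = np.random.default_rng(0)
qa, ra = rng.choice([-1.0, 1.0], 8), rng.random((8, 3))
qb, rb = rng.choice([-1.0, 1.0], 8), rng.random((8, 3)) + [1.0, 0.0, 0.0]
E, F, phi = box_box(qa, ra, qb, rb)
```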

4 Results

Three different test cases were studied. All test cases contain equally distributed particles (each box contains 8 particles). This guarantees that all boxes are occupied with particles and therefore all possible communication at the processors' borders will indeed take place. This represents a worst-case communication pattern for any number of processors, since empty boxes would not be communicated over the network. The key features can be summarized as follows: all expensive operations (box search, calculation) are done locally. In particular, gathering all needed data for each processor is done by the processor owning the data (ga_put), not by the processor receiving the data (ga_get). This saves unnecessary communication time and thus latency. Even for small test cases with few particles per processor the communication can be hidden behind the calculation. The static load-balancing approach outperforms a dynamic approach especially for clusters with long latency times. Compared to the model described in Sec. 1, this approach scales well beyond 32 processors; there is no visible break in the speedup at 32 processors.

Fig. 4. Three different test cases were computed. The speedup is almost independent of the number of particles, even for larger processor numbers. Thus, especially the smallest test case with 8^7 particles benefits from the reduced number of communication steps.

5 Outlook

The presented scheme assumes an approximately homogeneous distribution of the particles, but can be extended to handle inhomogeneous distributions by introducing “splitting boxes”, i.e., dividing large boxes further until a certain box granularity is reached; hence, the workload can again be distributed equally over all processors.

Table 1. Computation times in seconds / speedups for three different particle systems. For 8^7 particles the calculation was done with processor numbers up to 128, since the total computation time was already below 0.25 seconds and hence results with more processors would suffer from the low measurement precision. The upper limit of 512 processors was due to limitations in our GA implementation.

  procs    8^7 particles       8^8 particles        8^9 particles
      1     26.78 /     —      211.74 /     —       1654.13 /     —
      2     12.51 /   2.14     107.92 /   1.96       869.53 /   1.90
      4      6.43 /   4.16      59.19 /   3.58       409.70 /   4.04
      8      3.33 /   8.04      26.44 /   8.01       218.22 /   7.58
     16      1.76 /  15.22      14.03 /  15.09       105.01 /  15.75
     32      1.11 /  24.13       8.25 /  25.67        57.26 /  28.89
     64      0.45 /  59.51       3.89 /  54.43        28.59 /  57.86
    128      0.21 / 127.52       2.13 /  99.41        15.23 / 108.61
    256         — /      —       1.14 / 185.74         8.41 / 196.69
    512         — /      —       0.52 / 407.19         3.96 / 417.71

Acknowledgements. The authors acknowledge the support of H. Dachsel for his sequential FMM code, as well as of B. Kuehnel for his work on the parallel code as a guest student at Research Centre Jülich.

References

1. L. Greengard and V. Rokhlin: A fast algorithm for particle simulations. J. Comput. Phys. 73, No. 2 (1987) 325–348
2. C. A. White and M. Head-Gordon: Derivation and efficient implementation of the fast multipole method. J. Chem. Phys. 101 (1994) 6593–6605
3. H. Dachsel: An error-controlled Fast Multipole Method. (in preparation)
4. J. Nieplocha, B. Palmer, V. Tipparaju, M. Krishnan, H. Trease and E. Apra: Advances, Applications and Performance of the Global Arrays Shared Memory Programming Toolkit. IJHPCA 20, No. 2 (2006) 203–231
5. J. Nieplocha, V. Tipparaju, M. Krishnan, and D. Panda: High Performance Remote Memory Access Communications: The ARMCI Approach. IJHPCA 20, No. 2 (2006) 233–253
6. M.F. Mokbel, W.G. Aref and I. Kamel: Analysis of multi-dimensional space-filling curves. Geoinformatica 7, No. 3 (2003) 179–209
7. L. Greengard and W.D. Gropp: A parallel version of the multipole method. Computers Math. Applic. 20, No. 7 (1990) 63–71
8. J. Kurzak and B.M. Pettitt: Communications overlapping in fast multipole particle dynamics methods. J. Comput. Phys. 203 (2005) 731–743
9. J. Kurzak and B.M. Pettitt: Massively parallel implementation of a fast multipole method for distributed memory machines. J. Par. Dist. Comp. 65 (2005) 870–881

Efficient Generation of Parallel Quasirandom Faure Sequences Via Scrambling

Hongmei Chi1,2 and Michael Mascagni1,3

1 School of Computational Science, Florida State University, Tallahassee, FL 32306-4120, USA
[email protected]
2 Department of Computer and Information Sciences, Florida A&M University, Tallahassee, FL 32307-5100, USA
3 Department of Computer Science, Florida State University, Tallahassee, FL 32306-4530, USA

Abstract. Much of the recent work on parallelizing quasi-Monte Carlo methods has been aimed at splitting a quasirandom sequence into many subsequences which are then used independently on the various parallel processes. This method works well for the parallelization of pseudorandom numbers, but due to the nature of quality in quasirandom numbers, this technique has many drawbacks. In contrast, this paper proposes an alternative approach for generating parallel quasirandom sequences via scrambling. The exact meaning of the digit scrambling we use depends on the mathematical details of the quasirandom number sequence's method of generation. The Faure sequence is scrambled by modifying the generator matrices in its definition. Thus, we not only obtain the expected near-perfect speedup of naturally parallel Monte Carlo methods, but the errors in the parallel computation are even smaller than if the computation were done with the same quantity of quasirandom numbers using the original Faure sequence.

Keywords: Parallel computing, Faure sequence, quasi-Monte Carlo, scrambling, optimal sequences.

1 Introduction

One major advantage of Monte Carlo methods is that they are usually very easy to parallelize; this leads us to call Monte Carlo methods "naturally parallel" algorithms. This is, in principle, also true of quasi-Monte Carlo (QMC) methods. As with ordinary Monte Carlo, QMC applications have high degrees of parallelism, can tolerate large latencies, and usually require considerable computational effort, making them extremely well suited for parallel, distributed, and even Grid-based computational environments. Parallel computations using QMC require a source of quasirandom sequences, which are distributed among the individual parallel processes. In these environments, a large QMC problem can be broken up into many small subproblems. These subproblems are then scheduled
on the parallel, distributed, or Grid-based environment. In a more traditional instantiation, these environments are usually a workstation cluster connected by a local-area network, where the computational workload is carefully distributed. Recently, peer-to-peer and Grid-based computing, the cooperative use of geographically distributed resources unified to act as a single powerful computer, has been investigated as an appropriate computational environment for Monte Carlo applications [9,10]. QMC methods can significantly increase the accuracy of likelihood estimates over regular MC [7]. In addition, QMC can improve the convergence rate of traditional Markov chain Monte Carlo [16]. This paper explores an approach to generating parallel quasirandom sequences for parallel, distributed, and Grid-based computing. Like pseudorandom numbers, quasirandom sequences are deterministically generated, but, in contrast, are constructed to be highly uniformly distributed. The high level of uniformity is a global property of these sequences, and something as innocent sounding as the deletion of a single quasirandom point from a sequence can harm its uniformity quite drastically. The measure of uniformity used traditionally for quasirandom numbers is the star discrepancy. The reason for this is the Koksma-Hlawka inequality, the fundamental result in QMC theory, which motivates the search for and study of low-discrepancy sequences. This result states that for any sequence X = {x_0, ..., x_{N-1}} and any function, f, with bounded variation defined on the s-dimensional unit cube, I^s, the integration error over I^s is bounded as

$$\left| \int_{I^s} f(x)\, dx - \frac{1}{N}\sum_{i=1}^{N} f(x_i) \right| \le V[f]\, D_N^*, \qquad (1)$$

where V[f] is the total variation of f, in the sense of Hardy-Krause, and $D_N^*$ is the star discrepancy of the sequence X [20].¹ As we are normally given a problem to solve, it is only $D_N^*$ that we can control. This is done by constructing low-discrepancy sequences for use in QMC. However, the successful parallel implementation of a quasi-Monte Carlo application depends crucially on various quality aspects of the parallel quasirandom sequences used. Randomness can be brought to bear on otherwise deterministic quasirandom sequences by using various scrambling techniques. These methods randomize quasirandom sequences by using pseudorandom numbers to scramble the order of the quasirandom numbers generated or to permute their digits. Thus by the term "scrambling" we refer more generally to the randomization of quasirandom numbers. Scrambling provides a natural way to parallelize quasirandom sequences, because scrambled quasirandom sequences form a stochastic family which can be assigned to different processes in a parallel computation.

¹ For a sequence of N points X = {x_0, ..., x_{N-1}} in the s-dimensional unit cube I^s, and for any box J with one corner at the origin in I^s, the star discrepancy is $D_N^* = \sup_{J \in I^s} |\mu_X(J) - \mu(J)|$, where $\mu_X(J) = (\text{\# of points of } X \text{ in } J)/N$ is the discrete measure of J, i.e., the fraction of points of X in J, and μ(J) is the Lebesgue measure of J, i.e., the volume of J.

In addition, there are many scrambling schemes that produce scrambled sequences with the same $D_N^*$ as the parent, which means that we expect no degradation of results with these sorts of scrambling. These scrambling schemes are different from other proposed QMC parallelization schemes, such as the leap-frog scheme [2] and the blocking scheme [18], which split up a single quasirandom sequence into subsequences. The Faure sequence is one of the most widely used quasirandom sequences. The original construction of quasirandom sequences was related to the van der Corput sequence, which itself is a one-dimensional quasirandom sequence based on digital inversion. This digital inversion method is a central idea behind the construction of many current quasirandom sequences in arbitrary bases and dimensions. Following the construction of the van der Corput sequence, a significant generalization of this method was proposed by Faure [4] for the sequences that now bear his name. In addition, an efficient implementation of the Faure sequence was published shortly afterwards [6]. Later, Tezuka [20] proposed a generalized Faure sequence, GFaure, which forms a family of randomized Faure sequences. We discuss the scrambling of Faure sequences in this paper. The organization of this paper is as follows. In §2, an overview of scrambling methods and a brief introduction to the theory of constructing quasirandom sequences is given. Parallelization and implementation of quasirandom Faure sequences are presented in §3. The consequences of choosing a generator matrix, the resulting computational issues, and some numerical results are illustrated in §4, and conclusions are presented in §5.

2 Scrambling

Before we begin our discussion of the scrambled Faure sequence, it behooves us to describe, in detail, the standard and widely accepted methods of Faure sequence generation. The reason for this is that the construction of the Faure sequence follows from the generation of the Van der Corput and Sobol´ sequences. Often QMC scrambling methods are combined with the original quasirandom number generation algorithms. The construction of quasirandom sequences is based on finite-field arithmetic. For example, the Sobol´ sequences [19] are constructed using linear recurring relations over $F_2$. Faure used the Pascal matrix in his construction, and Niederreiter used the formal Laurent series over finite fields to construct the low-discrepancy sequences that now bear his name. We now briefly describe the construction of the above "classical" quasirandom sequences.

– Van der Corput sequences: Let b ≥ 2 be an integer, and n a non-negative integer with $n = n_0 + n_1 b + \ldots + n_m b^m$ its b-adic representation. Then the nth term of the Van der Corput sequence is $\phi_b(n) = \frac{n_0}{b} + \frac{n_1}{b^2} + \ldots + \frac{n_m}{b^{m+1}}$. Here $\phi_b(n)$ is the radical inverse function in base b, and $n = (n_0, n_1, \ldots, n_m)^T$
is the digit vector of the b-adic representation of n. $\phi_b(\cdot)$ simply reverses the digit expansion of n and places it to the right of the "decimal" point. The Van der Corput sequence is one of the most basic one-dimensional quasirandom sequences; its s-dimensional generalization, more commonly called the Halton sequence, can be written in the following form: $(\phi_{b_1}(Cn), \phi_{b_2}(Cn), \ldots, \phi_{b_s}(Cn))$. Here the "generator matrix" C is the identity matrix, and the nth Halton point is defined as $(\phi_{b_1}(n), \phi_{b_2}(n), \ldots, \phi_{b_s}(n))$, where the bases $b_1, b_2, \ldots, b_s$ are pairwise coprime. The other commonly used quasirandom sequences can be similarly defined by specifying different generator matrices.

– Faure and GFaure sequences: The nth element of the Faure sequence is expressed as $(\phi_b(P^0 n), \phi_b(P^1 n), \ldots, \phi_b(P^{s-1} n))$, where b is prime, b ≥ s, and P is the Pascal matrix whose (i, j) element is equal to $\binom{i-1}{j-1}$. Tezuka [22] proposed the generalized Faure sequence, GFaure, with the jth-dimension generator matrix $C^{(j)} = A^{(j)} P^{j-1}$, where the $A^{(j)}$ are random nonsingular lower triangular matrices. Faure [5] extended this idea to a bigger set. A subset of GFaure is called GFaure with the i-binomial property [21], where $A^{(j)}$ is defined as

$$A^{(j)} = \begin{pmatrix} h_1 & 0 & 0 & 0 & \cdots \\ g_2 & h_1 & 0 & 0 & \cdots \\ g_3 & g_2 & h_1 & 0 & \cdots \\ g_4 & g_3 & g_2 & h_1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$

where $h_1$ is uniformly distributed on $F_b - \{0\}$ and the $g_i$ are uniformly distributed on $F_b$. For each $A^{(j)}$ there will be a different random matrix of the above form. We will generate parallel Faure sequences by randomizing the matrix $A^{(j)}$. However, an arbitrary choice of matrix could lead to high correlation between different quasirandom streams. We address this concern in the next section and show how to choose the matrix $A^{(j)}$ so that the correlation between streams is minimized.
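A compact sketch of the digit-vector construction described above: the b-adic digits of n are multiplied by the generator matrix $C^{(j)} = A^{(j)} P^{j-1}$ modulo b and mapped to [0, 1) by the radical inverse. Conventions for digit ordering and for transposing the Pascal matrix differ between references, so this is one possible reading for illustration, not a verified reproduction of any production generator.

```python
import numpy as np

def digits(n, b, m):
    """b-adic digit vector (n_0, ..., n_{m-1}) of n, least significant digit first."""
    d = np.zeros(m, dtype=int)
    for k in range(m):
        d[k], n = n % b, n // b
    return d

def radical_inverse(dig, b):
    """phi_b: map a digit vector to [0, 1) by mirroring it about the radix point."""
    return sum(int(dk) * b ** -(k + 1) for k, dk in enumerate(dig))

def pascal_matrix(m, b):
    """Lower triangular Pascal matrix mod b, (i, j) element = binomial(i-1, j-1)."""
    P = np.zeros((m, m), dtype=int)
    P[:, 0] = 1
    for i in range(1, m):
        for j in range(1, i + 1):
            P[i, j] = (P[i - 1, j - 1] + P[i - 1, j]) % b
    return P

def gfaure_point(n, s, b, A=None, m=16):
    """n-th point; coordinate j uses C^(j) = A^(j) P^(j-1).  With A = None
    (i.e. A^(j) = I) this reduces to the original Faure sequence; pass a list
    of s lower triangular matrices A to scramble."""
    a = digits(n, b, m)
    P = pascal_matrix(m, b)
    Pj = np.eye(m, dtype=int)                 # P^0 for the first coordinate
    x = np.empty(s)
    for j in range(s):
        C = (A[j] @ Pj) % b if A is not None else Pj
        x[j] = radical_inverse((C @ a) % b, b)
        Pj = (Pj @ P) % b
    return x

print(gfaure_point(7, s=3, b=3))              # b prime, b >= s
```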

3 Parallelization and Implementations

Much of the recent work on parallelizing quasi-Monte Carlo methods has been aimed at splitting a quasirandom sequence into many subsequences which are then used independently on the various parallel processes. This method works
well for the parallelization of pseudorandom numbers, but due to the nature of quality in quasirandom numbers, this technique has many drawbacks. In contrast, this paper proposes an alternative approach for generating parallel quasirandom sequences. Here we take a single quasirandom sequence and provide different random digit scramblings of the given sequence. If the scrambling preserves certain equidistribution properties of the parent sequence, then the result will be high-quality quasirandom numbers for each parallel process, and an overall successful parallel quasi-Monte Carlo computation, as expected from the Koksma-Hlawka inequality [11]. Parallelization via splitting [18] uses a single quasirandom sequence and assigns subsequences of this quasirandom sequence to different processes. The idea behind splitting is to assume that any subsequence of a given quasirandom sequence has the same uniformity as the parent quasirandom sequence – an assumption that is often false [8]. In comparison to splitting, each scrambled sequence can be thought of as an independent sequence and thus assigned to a different process, and under certain circumstances it can be proven that the scrambled sequences are as uniform as the parent [15]. Since the quality (small discrepancy) of quasirandom sequences is a collective property of the entire sequence, forming new sequences from parts is potentially troublesome. Therefore, scrambled quasirandom sequences provide a very appealing alternative for parallel quasirandom sequences, especially when a single quasirandom sequence is scrambled to provide all the parallel streams. Such a scheme would also be very useful for providing QMC support to the computational Grid [9,10]. We consider a variety of generator matrices for parallelizing and implementing parallel Faure sequences. The generation of the original Faure sequence is fast and easy to implement. Here, we apply the same method as was used for the Halton sequence, because the original Halton sequence suffers from correlations [3] between radical inverse functions with different bases used for different dimensions. These correlations result in poorly distributed two-dimensional projections. A standard solution to this phenomenon is to use a randomized (scrambled) version of the Halton sequence. We proposed a similar algorithm to improve the quality of the Faure sequence, and we use the same approach here to randomize the generator matrix and thus generate parallel Faure sequences. The strategy used in [3] is to find optimal h_i and g_i in base b to improve the quality of the Faure sequence. Most importantly, the two most significant digits of each Faure point are only scrambled by h_1 and g_2; the choice of these two elements is therefore crucial for producing well-scrambled Faure sequences. After these choices, the rest of the elements of the matrix $A^{(j)}$ can be chosen randomly. Whenever scrambling methods are applied, pseudorandom numbers are the "scramblers". Therefore, it is important to find a good pseudorandom number generator (PRNG) to act as a scrambler so that we can obtain well-scrambled quasirandom sequences. A good parallel pseudorandom number generator such as SPRNG [13,12] is chosen as our scrambler.²

² The SPRNG library can be downloaded at http://sprng.fsu.edu.

4 Applications

FINDER [17], a commercial software system which uses quasirandom sequences to solve problems in finance, is an example of the successful use of GFaure; a modified GFaure sequence is included in FINDER, where the matrices $A^{(j)}$ are empirically chosen to provide high-quality sequences. Assume that an integrand, f, is defined over the s-dimensional unit cube, [0, 1)^s, and that I(f) is defined as $I(f) = \int_{I^s} f(x)\, dx$. Then the s-dimensional integral I(f) may be approximated by $Q_N(f)$ [14]:

$$Q_N(f) = \sum_{i=1}^{N} \omega_i f(x_i),$$

where the $x_i$ are in [0, 1)^s and the $\omega_i$ are weights. If $\{x_1, \ldots, x_N\}$ is chosen randomly and $\omega_i = 1/N$, then $Q_N(f)$ becomes the standard Monte Carlo integral estimate, whose statistical error can be estimated using the Central Limit Theorem. If $\{x_1, \ldots, x_N\}$ is a set of quasirandom numbers, then $Q_N(f)$ is a quasi-Monte Carlo estimate, and the above-mentioned Koksma-Hlawka inequality can be appealed to for error bounds. To empirically test the quality of our proposed parallel Faure sequences, we evaluate the test integral discussed in [6] with $a_i = 1$:

$$\int_0^1 \cdots \int_0^1 \prod_{i=1}^{s} \frac{|4x_i - 2| + a_i}{1 + a_i}\, dx_1 \ldots dx_s = 1. \qquad (2)$$

Table 1. Estimation of the integral $\int_0^1 \cdots \int_0^1 \prod_{i=1}^{s} \frac{|4x_i - 2| + 1}{2}\, dx_1 \ldots dx_s = 1$

  r    N       s = 13   s = 20   s = 25   s = 40
  10    1000    0.125    0.399    0.388    0.699
  10    3000    0.908    0.869    0.769    0.515
  10    5000    0.912    0.985    1.979    0.419
  10    7000    0.943    1.014    1.342    0.489
  10   30000    0.988    1.097    1.171    1.206
  10   40000    1.014    1.118    1.181    1.118
  10   50000    1.006    1.016    1.089    1.034

In Table 1 we present the average of ten (r = 10) parallel Faure streams for computing the numerical value of the integral. The accuracy of quasi-Monte Carlo integration depends not simply on the dimension of the integrands, but on their effective dimension [23]. It is instructive to present the results of these numerical integrations when we instead use the original Faure sequence to provide the same number of points as were used in the above computations; Table 2 shows these results. The astonishing fact is that the quality of the numerical integration using the original sequence is lower than with the parallel scrambled Faure sequences. We do not report speedup results, as this is a naturally parallel algorithm, but we feel it important to stress that using scrambling in QMC for parallelization has a very interesting consequence: the quality of the parallel sequences is actually better than that of the original sequence. The authors are not aware of similar situations, where parallelization actually increases the quality of a computation, but are interested to learn about more examples in the literature.

Table 2. Estimation of the integral $\int_0^1 \cdots \int_0^1 \prod_{i=1}^{s} \frac{|4x_i - 2| + 1}{2}\, dx_1 \ldots dx_s = 1$

  r    N       s = 13   s = 20   s = 25   s = 40
  10    1000    0.412    0.456    1.300    0.677
  10    3000    1.208    0.679    0.955    0.775
  10    5000    0.919    0.976    1.066    0.871
  10    7000    1.043    1.214    0.958    0.916
  10   30000    0.987    1.097    0.989    1.026
  10   40000    1.010    1.103    1.201    0.886
  10   50000    0.996    1.006    1.178    0.791
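The kind of estimate reported in Tables 1 and 2 can be reproduced in spirit with any randomized quasirandom generator. The sketch below uses SciPy's scrambled Sobol' points purely as a stand-in – scrambled Faure/GFaure generators are not part of standard libraries – to estimate the test integral of Eq. (2) with a_i = 1.

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    """Integrand of Eq. (2) with a_i = 1; its exact integral over [0,1)^s is 1."""
    return np.prod((np.abs(4.0 * x - 2.0) + 1.0) / 2.0, axis=-1)

s = 13
points = qmc.Sobol(d=s, scramble=True, seed=42).random_base2(m=15)   # 2**15 points
print(f(points).mean())        # should be close to the exact value 1
```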

5 Conclusions

A new scheme for generating parallel QMC streams using different GFaure sequences was proposed. The advantage of this scheme is that we can provide unlimited independent streams for QMC in heterogeneous computing environments. This scheme is an alternative for generating parallel quasirandom number streams. The results obtained are very interesting, as the scrambled versions used in the individual processes are of higher quality than the original Faure sequence. We need to carry out big, yet feasible, computations that will provide the data required for a parallel generator based on these ideas. More numerical experiments, such as applications in bioinformatics [1], need to be done to further validate this parallel method. However, this parallelization has a very interesting property: the parallel versions of the quasirandom numbers are of better quality than the original sequence. This is due to the fact that the scrambling done for parallelization also increases the quality of the sequences. Thus, not only do we have a faster parallel computation, but the parallel computation is simultaneously more accurate without any extra computational effort.

References

1. P. Beerli and H. Chi. Quasi-Markov chain Monte Carlo to improve inference of population genetic parameters. Mathematics and Computers in Simulation, in press, 2007.
2. B.C. Bromley. Quasirandom number generators for parallel Monte Carlo algorithms. Journal of Parallel and Distributed Computing, 38(1):101–104, 1996.
3. H. Chi, M. Mascagni, and T. Warnock. On the optimal Halton sequences. Mathematics and Computers in Simulation, 70/1:9–21, 2005.
4. H. Faure. Discrepancy of sequences associated with a number system (in dimension s). Acta Arith., 41(4):337–351, 1982 [French].
5. H. Faure. Variations on (0,s)-sequences. Journal of Complexity, 17(4):741–753, 2001.

6. B. Fox. Implementation and relative efficiency of quasirandom sequence generators. ACM Trans. on Mathematical Software, 12:362–376, 1986.
7. W. Jank. Efficient simulated maximum likelihood with an application to online retailing. Statistics and Computing, 16:111–124, 2006.
8. L. Kocis and W. Whiten. Computational investigations of low discrepancy sequences. ACM Trans. Mathematical Software, 23:266–294, 1997.
9. Y. Li and M. Mascagni. Analysis of large-scale grid-based Monte Carlo applications. International Journal of High Performance Computing Applications, 17(4):369–382, 2003.
10. K. Liu and F. J. Hickernell. A scalable low discrepancy point generator for parallel computing. In Lecture Notes in Computer Science, volume 3358, pages 257–262, 2004.
11. W. L. Loh. On the asymptotic distribution of scrambled net quadrature. Annals of Statistics, 31:1282–1324, 2003.
12. M. Mascagni and H. Chi. Parallel linear congruential generators with Sophie-Germain moduli. Parallel Computing, 30:1217–1231, 2004.
13. M. Mascagni and A. Srinivasan. Algorithm 806: SPRNG: A scalable library for pseudorandom number generation. ACM Transactions on Mathematical Software, 26:436–461, 2000.
14. H. Niederreiter. Random Number Generation and Quasi-Monte Carlo Methods. SIAM, Philadelphia, 1992.
15. A. B. Owen. Randomly permuted (t,m,s)-nets and (t,s)-sequences. Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Lecture Notes in Statistics 106:299–317, 1995.
16. A. B. Owen and S. Tribble. A quasi-Monte Carlo Metropolis algorithm. Proceedings of the National Academy of Sciences of the United States of America, 102:8844–8849, 2005.
17. A. Papageorgiou and J. Traub. Beating Monte Carlo. RISK, 9:63–65, 1997.
18. W. Schmid and A. Uhl. Techniques for parallel quasi-Monte Carlo integration with digital sequences and associated problems. Math. Comput. Simulat., 55(1-3):249–257, 2001.
19. I. M. Sobol´. Uniformly distributed sequences with additional uniformity properties. USSR Comput. Math. and Math. Phys., 16:236–242, 1976.
20. S. Tezuka. Uniform Random Numbers: Theory and Practice. Kluwer Academic Publishers, IBM Japan, 1995.
21. S. Tezuka and H. Faure. I-binomial scrambling of digital nets and sequences. Journal of Complexity, 19(6):744–757, 2003.
22. S. Tezuka and T. Tokuyama. A note on polynomial arithmetic analogue of Halton sequences. ACM Trans. on Modeling and Computer Simulation, 4:279–284, 1994.
23. X. Wang and K.T. Fang. The effective dimension and quasi-Monte Carlo integration. Journal of Complexity, 19(2):101–124, 2003.

Complexity of Monte Carlo Algorithms for a Class of Integral Equations

Ivan Dimov1,2 and Rayna Georgieva2

1 Centre for Advanced Computing and Emerging Technologies, School of Systems Engineering, The University of Reading, Whiteknights, PO Box 225, Reading, RG6 6AY, UK
[email protected]
2 Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bonchev 25 A, 1113 Sofia, Bulgaria
[email protected], [email protected]

Abstract. In this work we study the computational complexity of a class of grid Monte Carlo algorithms for integral equations. The idea of the algorithms consists in an approximation of the integral equation by a system of algebraic equations. Then the Markov chain iterative Monte Carlo is used to solve the system. The assumption here is that the corresponding Neumann series for the iterative matrix does not necessarily converge or converges slowly. We use a special technique to accelerate the convergence. An estimate of the computational complexity of Monte Carlo algorithm using the considered approach is obtained. The estimate of the complexity is compared with the corresponding quantity for the complexity of the grid-free Monte Carlo algorithm. The conditions under which the class of grid Monte Carlo algorithms is more efficient are given.

1 Introduction

The Monte Carlo method (MCM) has established itself as a powerful numerical approach for the investigation of various problems (evaluation of integrals, solving integral equations, boundary value problems) with the progress of modern computational systems. In this paper, a special class of integral equations obtained from boundary value problems for elliptic partial differential equations is considered. Many problems in the areas of environmental modeling, radiation transport, semiconductor modeling, and remote geological sensing are described in terms of integral equations that appear as integral representations of elliptic boundary value problems. In particular, the approach presented in this paper is of great importance for studying environmental security. There are different Monte Carlo algorithms (MCAs) for solving integral equations; a class of grid Monte Carlo algorithms (GMCAs) is the subject of the present research. The question Which Monte Carlo algorithm is preferable to solve a given problem? is of great importance in computational mathematics. That is why the purpose of this paper is to study the conditions under which the class of algorithms under consideration solves a given problem more efficiently, with the same accuracy, than other MCAs, or is the only applicable one. Here we compare the efficiency of grid MCAs with a known grid-free Monte Carlo algorithm (GFMCA), called the spherical process (see [5]). A measure of the efficiency of an algorithm is its complexity (computational cost), which is defined as the mean number of operations (arithmetic and logical) necessary for computing the value of the random variable for a transition in the Markov chain.

* Partially supported by NATO grant "Monte Carlo Sensitivity Studies of Environmental Security" (PDD(TC)-(ESP.EAP.CLG 982641), the BIS-21++ project funded by the European Commission (INCO-CT-2005-016639), as well as by the Ministry of Education and Science of Bulgaria under grant I-1405/2004.

2 Formulation of the Problem

We consider a special class of Fredholm integral equations that normally appears as an integral representation of some boundary-value problems for differential equations. As an example with many interesting applications we consider an elliptic boundary value problem:

$$M u = -\phi(x),\ x \in \Omega \subset \mathbb{R}^d; \qquad u = \omega(x),\ x \in \partial\Omega, \qquad (1)$$

where $M = \sum_{i=1}^{d}\left(\frac{\partial^2}{\partial (x^{(i)})^2} + v_i(x)\,\frac{\partial}{\partial x^{(i)}}\right) + w(x)$, $x = (x^{(1)}, x^{(2)}, \ldots, x^{(d)})$.

Definition 1. The domain Ω belongs to the class $A^{(n,\nu)}$ if it is possible to associate a hypersphere Γ(x) with each point x ∈ ∂Ω, so that the boundary ∂Ω can be presented as a function $z^{(d)} = \zeta(z^{(1)}, \ldots, z^{(d-1)})$ in the neighborhood of x for which $\zeta^{(n)}(z^{(1)}, z^{(2)}, \ldots, z^{(d-1)}) \in C^{(0,\nu)}$, i.e. $|\zeta^{(n)}(z_1) - \zeta^{(n)}(z_2)| \le \mathrm{const}\, |z_1 - z_2|^{\nu}$, where the vectors $z_1 = (z_1^{(1)}, z_1^{(2)}, \ldots, z_1^{(d-1)})$ and $z_2 = (z_2^{(1)}, z_2^{(2)}, \ldots, z_2^{(d-1)})$ are (d − 1)-dimensional vectors and ν ∈ (0, 1].

If in the bounded domain $\bar\Omega \in A^{(1,\nu)}$ the coefficients of the operator M satisfy the conditions $v_j, w(x) \in C^{(0,\nu)}(\bar\Omega)$, $w(x) \le 0$ and $\phi \in C^{(0,\nu)}(\Omega) \cap C(\bar\Omega)$, $\omega \in C(\partial\Omega)$, the problem (1) has a unique solution $u(x) \in C^2(\Omega) \cap C(\bar\Omega)$. The conditions for uniqueness of the solution can be found in [9]. An integral representation of the solution is obtained using the Green's function for standard domains B(x), x ∈ Ω (for example, a sphere, ball, or ellipsoid), lying inside the domain Ω, taking into account that B(x) satisfies the required conditions (see [9]). Therefore, the initial problem of solving the elliptic differential problem (1) is transformed into the following Fredholm integral equation of the second kind with a spectral parameter λ (K is an integral operator, $K : L_p \to L_p$):

$$u(x) = \lambda \int_{B(x)} k(x, t)\, u(t)\, dt + f(x), \quad x \in \Omega \quad (\text{or } u = \lambda K u + f), \qquad (2)$$

where k(x, t) and f(x) are obtained using Levy's function and satisfy:

$$k(x,t) \in L_p^x(\Omega) \cap L_q^t(B(x)), \quad f(x) \in L_p(\Omega), \quad p, q \in \mathbb{Z},\ p, q \ge 0,\ \frac{1}{p} + \frac{1}{q} = 1.$$

The unknown function is denoted by $u(x) \in L_p(\Omega)$, $x \in \Omega$, $t \in B(x)$.

We are interested in Monte Carlo methods for evaluating, with an a priori given error ε, linear functionals of the solution of the integral equation (2) of the following type:

$$J(u) = \int_{\Omega} \varphi(x)\, u(x)\, dx = (\varphi, u) \quad \text{for } \lambda = \lambda^*. \qquad (3)$$

It is assumed that ϕ(x) ∈ Lq (Ω), q ≥ 0, q ∈ Z.

3 A Class of Grid Monte Carlo Algorithms for Integral Equations

The investigated grid Monte Carlo approach for the approximate evaluation of the linear functional (3) is based on the approximation of the given integral equation (2) by a system of linear algebraic equations (SLAE). This transformation represents the initial step of the considered class of grid MCAs. It is obtained using some approximate cubature rule (cubature method, Nystrom method, [1,7]). The next step is to apply the resolvent MCA [2,3] for solving the resulting linear system of equations.

3.1 Cubature Method

Let the set $\{A_j\}_{j=1}^m$ be the weights and the points $\{x_j\}_{j=1}^m \in \Omega$ be the nodes of the chosen cubature formula. Thus, the initial problem of evaluating $(\varphi, u)$ is transformed into the problem of evaluating the bilinear form (h, y) of the solution y of the obtained SLAE:

y = λ L y + b,   L = {l_ij} ∈ R^{m×m},   y = {y_i}, b = {b_i}, h = {h_i} ∈ R^{m×1},   (4)

with the vector h ∈ R^{m×1}. The following notation is used: l_ij = A_j k(x_i, x_j), y_i = u(x_i), b_i = f(x_i), h_i = A_i φ(x_i), i, j = 1, …, m.
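The discretization step just described can be pictured with a short numpy sketch. The kernel, right-hand side, weight function and the trapezoidal weights below are hypothetical stand-ins introduced only for illustration (they are not taken from the paper); the sketch shows how the matrix L, the vectors b and h of (4) are assembled and how the bilinear form (h, y) approximates (φ, u).

```python
import numpy as np

# Hypothetical problem data on Omega = [a, b] (illustrative choices, not from the paper)
a, b, lam, m = 0.0, 1.0, 0.5, 64
k   = lambda x, t: 0.5 * np.exp(-(x - t) ** 2)   # kernel k(x, t)
f   = lambda x: np.sin(np.pi * x)                # right-hand side f(x)
phi = lambda x: np.ones_like(x)                  # weight phi(x) of the functional (3)

# Nodes x_j and trapezoidal cubature weights A_j
x = np.linspace(a, b, m)
A = np.full(m, (b - a) / (m - 1)); A[0] *= 0.5; A[-1] *= 0.5

# SLAE (4): y = lam * L y + b_vec,  with l_ij = A_j k(x_i, x_j), b_i = f(x_i), h_i = A_i phi(x_i)
L = A[None, :] * k(x[:, None], x[None, :])
b_vec = f(x)
h = A * phi(x)

# Deterministic reference: solve (I - lam*L) y = b_vec and evaluate the bilinear form (h, y)
y = np.linalg.solve(np.eye(m) - lam * L, b_vec)
print("(h, y) ~", h @ y)
```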

The error in the approximation on the first step is equal to

λ Σ_{i=1}^m h_i ρ₁(x_i; m, k, u) + ρ₂(m, φ, u),

where ρ₁(x_i; m, k, u) and ρ₂(m, φ, u) are the approximation errors for the integral in equation (2) at the node x_i and for the linear functional (3), respectively. Estimates of the errors ρ₁, ρ₂ arising from the approximation with a quadrature formula, in the case when Ω is an interval [a, b] ⊂ R, are given below. The errors depend on derivatives of some order of the functions k(x, t) u(t) and φ(x) u(x). Estimates for these quantities, obtained after differentiation of the integral equation and using Leibniz's rule, are given in the works of Kantorovich and Krylov [7]. Analogous estimates are given below:

|∂^j/∂t^j [k(x_i, t) u(t)]| ≤ Σ_{l=0}^j (j choose l) F^(l) K_t^(j−l) + |λ|(b − a) Σ_{l=0}^j (j choose l) K_x^(l) K_t^(j−l) U^(0),

|(φu)^(j)| ≤ Σ_{l=0}^j (j choose l) F^(l) Φ^(j−l) + |λ|(b − a) Σ_{l=0}^j (j choose l) K_x^(l) Φ^(j−l) U^(0),

where

K_x^(j) = max_{t∈B(x)} |∂^j k(x, t)/∂x^j| at x = x_i,   K_t^(j) = max_{t∈B(x)} |∂^j k(x_i, t)/∂t^j|,

F^(j) = max_{x∈Ω} |f^(j)(x)|,   U^(j) = max_{x∈Ω} |u^(j)(x)|,   Φ^(j) = max_{x∈Ω} |φ^(j)(x)|.

The quantity U^(0), which represents the maximum of the solution u in the interval Ω = [a, b], is unknown. We estimate it using the original integral equation, where the maximum of the solution in the right-hand side is estimated by the maximum of the initial approximation: U^(0) ≤ (1 + |λ|(b − a) K^(0)) F^(0).

3.2 Resolvent Monte Carlo Method for SLAE

An iterative Monte Carlo algorithm is used for evaluating the bilinear form (h, y) of the solution of the SLAE (4), obtained after the discretization of the given integral equation (2). Consider the discrete Markov chain T: k₀ → k₁ → … → k_i with m states 1, 2, …, m. The chain is constructed according to an initial probability π = {π_i}_{i=1}^m and a transition probability P = {p_ij}_{i,j=1}^m. These probabilities have to be normalized and tolerant to the vector h and the matrix L, respectively. It is known (see, for example, [5,11]) that the mathematical expectation of the random variable defined by the formula

θ[h] = (h_{k₀}/π_{k₀}) Σ_{j=0}^∞ W_j b_{k_j},   where W₀ = 1, W_j = W_{j−1} l_{k_{j−1} k_j} / p_{k_{j−1} k_j},

is equal to the unknown bilinear form, i.e. Eθ[h] = (h, y). The iterative MCM is characterized by two types of errors:

– a systematic error r_i, i ≥ 1 (obtained from the truncation of the Markov chain), which depends on the number of iterations i of the used iterative process: |r_i| ≤ α^{i+1} ‖b‖₂ / (1 − α),

where α = |λ| ‖L‖₂, b = {b_j}_{j=1}^m, b_j = f(x_j);

– a statistical error r_N, which depends on the number of samples N of the Markov chain: r_N = c_β σ(θ[h]) N^{−1/2}, 0 < β < 1, β ∈ R.

The constant c_β (and therefore also the complexity estimates of the algorithms) depends on the confidence level β. The probable error, which corresponds to a 50% confidence level, is often used. Achieving a good balance between the systematic and the statistical error is of great practical importance.
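The following sketch is my own illustration of such a Markov-chain estimator for (h, y), with MAO-style probabilities proportional to |h| and to the rows of |L|; the test data at the bottom are hypothetical, and the chain is simply truncated after a fixed number of transitions. It assumes ‖λL‖ < 1, so the underlying Neumann series converges.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_bilinear_form(h, L, b, lam, n_walks=20_000, n_steps=20):
    """Markov-chain MC estimate of (h, y) for y = lam*L*y + b (unoptimized sketch)."""
    m = len(h)
    pi = np.abs(h) / np.abs(h).sum()                          # initial probabilities
    row = np.abs(L) / np.abs(L).sum(axis=1, keepdims=True)    # transition probabilities (rows assumed non-zero)
    total = 0.0
    for _ in range(n_walks):
        k0 = rng.choice(m, p=pi)
        k, W, est = k0, 1.0, 0.0
        for j in range(n_steps + 1):
            est += (lam ** j) * W * b[k]                      # score lam^j * W_j * b_{k_j}
            if j < n_steps:
                k_next = rng.choice(m, p=row[k])
                W *= L[k, k_next] / row[k, k_next]            # weight update W_{j+1}
                k = k_next
        total += (h[k0] / pi[k0]) * est
    return total / n_walks

# Usage sketch: small random system with ||lam*L|| < 1, compared with a direct solve
m = 50
L = rng.random((m, m)) / m
b = rng.random(m); h = rng.random(m); lam = 0.5
y_exact = np.linalg.solve(np.eye(m) - lam * L, b)
print(mc_bilinear_form(h, L, b, lam), "vs", h @ y_exact)
```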

4 Estimate of the Computational Complexity

In this section the computational complexity of two approaches for solving integral equations is analysed. These approaches are related to iterative Monte Carlo methods and have a similar order of computational cost. That is why our main goal is to compare the coefficients of the leading terms in the expressions for the complexity of the algorithms under consideration. The values of these coefficients (which depend on the number of operations necessary for every move in the Markov chain) allow us to determine the conditions under which the considered grid MCA has higher computational efficiency than the mentioned grid-free MCA.


4.1 A Grid Monte Carlo Algorithm

To estimate the performance of MCAs, one has to consider the mathematical expectation E T(A) of the time required for solving the problem using an algorithm A (see [4]). Let l_A and l_L be the number of suboperations of the arithmetic and logical operations, respectively. The time required to complete a suboperation is denoted by τ (for real computers this is usually the clock period).

Cubature Algorithm. The computational complexity is estimated for a given cubature rule:

T(CA) > τ [c^s (p_k + 1) ε^{−s} + c^{−s/2} (p_f + p_φ + p_node) ε^{−s/2} + p_coef] l_A,

where the constant c depends on the following quantities:

c = c(λ, K_x^(r), K_t^(r), F^(r), Φ^(r)),   r = 1, …, ADA + 1,   s = s(ADA)   (5)

(ADA is the algebraic degree of accuracy of the chosen cubature formula). The number of arithmetic operations required to compute one value of the functions k(x, t), f(x) and φ(x) is denoted by p_k, p_f and p_φ, respectively, and the number required to compute one node (coefficient) by p_node (p_coef). The degree s and the constants p_node and p_coef depend on the applied formula. For instance:

s = 1 for the rectangular and trapezoidal rules;   s = 1/2 for Simpson's rule.   (6)

Resolvent Monte Carlo Algorithm. First, the case when the corresponding Neumann series converges (possibly slowly) is considered. The following number of operations is necessary for one random walk:

– generation of one random number: k_A arithmetic and k_L logical operations;
– modeling the initial probability π to determine the initial or next point in the Markov chain: μ_A arithmetic and μ_L logical operations (E μ_A + 1 = E μ_L = μ, 1 ≤ μ ≤ m − 1);
– computing one value of the random variable: 4 arithmetic operations.

To calculate the initial probability π and the transition probabilities P in advance (a vector and a square matrix, respectively), a number of arithmetic operations proportional to the matrix dimension m is necessary: 2m(1 + m). To ensure a statistical error ε, it is necessary to perform i transitions in the Markov process, where i is chosen from the inequality

i > (ln ε + ln(1 − α) − ln ‖b‖₂) / ln α − 1   (assuming ‖b‖₂ > ε (1 − α)),

where α = |λ| ‖L‖₂ and the initial approximation is chosen to be the right-hand side b. To achieve a probable error ε, it is necessary to perform N samples, chosen from the inequality N > c_{0.5} σ²(θ) ε^{−2}, c_{0.5} ≈ 0.6745, where θ is the random variable whose mathematical expectation coincides with the desired linear functional (3).
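As a back-of-the-envelope illustration of these two inequalities, the sketch below computes the number of transitions i and the number of samples N for hypothetical values of ε, λ, ‖L‖₂, ‖b‖₂ and of the variance estimate (all inputs are assumptions chosen for the example, not values from the paper).

```python
import numpy as np

# Hypothetical inputs; the inequalities require alpha < 1 and ||b||_2 > eps*(1 - alpha).
eps, lam, L_norm, b_norm, var_theta = 1e-3, 0.5, 0.8, 1.0, 2.0
alpha = abs(lam) * L_norm                                   # alpha = |lambda| * ||L||_2

# Number of transitions i (truncation of the Markov chain)
i = int(np.ceil((np.log(eps) + np.log(1 - alpha) - np.log(b_norm)) / np.log(alpha) - 1))

# Number of samples N for a probable error eps (c_0.5 ~ 0.6745)
N = int(np.ceil(0.6745 * var_theta / eps ** 2))

print("transitions i >", i, "  samples N >", N)
```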


Therefore, the following estimate holds for the mathematical expectation of the time required to obtain an approximation with accuracy ε using the considered grid MCA:

E T(RMCA) > τ [(k_A + μ + 3) l_A + (k_L + μ) l_L] · [c_β σ(ξ_j^R[h])]² (ln³ε + a) / (ε² ln³α) + 2τ m(m + 1) l_A,

where a = ln(1 − α) − ln ‖b‖₂, m > c^s ε^{−s/2} (the constants are given by (5) and (6)), and ξ_j^R is the unbiased estimate of the j-th iteration of the matrix L, obtained using the resolvent MCA.

Consider the case when the corresponding Neumann series does not converge. The convergence of the Monte Carlo algorithm for solving the SLAE (4) can be ensured (or accelerated) by an analytical continuation of the Neumann series by substitution of the spectral parameter λ (a mapping) (see [2,6,7,8,10]). The main advantage of this approach to accelerating the convergence of an iterative process is its negligible influence on the computational complexity of the algorithm. The computational complexity of every walk is increased by only one arithmetic operation, required for the multiplication by the coefficients g_j, j ≥ 0, that ensure convergence (on the supposition that these coefficients are calculated with high precision in advance). To obtain the computational complexity of the modified algorithm, it is necessary to estimate the variance of the new random variable

θ[h] = (h_{k₀}/π_{k₀}) Σ_{j=0}^∞ g_j W_j b_{k_j}.

We will use the following statement for a class of mappings, proved in [10]: the conformal mapping λ = ψ(η) = a₁η + a₂η² + … has only simple poles on its boundary of convergence |η| = 1. If Var ξ_k^R ≤ σ² and q = ā |η*| / (1 − |η*|) < 1, then the complexity estimate of the algorithm has order O(|ln ε|⁴ / ε²), where ā is a constant such that |a_i| ≤ ā, i = 1, 2, …, λ* is the value of the spectral parameter in the integral equation (2) (respectively, the SLAE (4)), and η* = ψ^{−1}(λ*). In general, a computational estimate for this class of grid MCAs can be obtained if the behavior of g_j and Var ξ_j^R is known.

4.2 A Grid-Free Monte Carlo Algorithm

The computational complexity of the grid MCA under consideration is compared with the computational complexity of a grid-free Monte Carlo approach. This approach is based on the use of a local integral representation (assuming that such a representation exists, [9,12]) of the solution of an elliptic boundary value problem. The existence of this representation allows one to construct a Monte Carlo algorithm, called a spherical process (in the simplest case), for computing the corresponding linear functional. As a first step of this algorithm, a δ-strip ∂Ω_δ of the boundary ∂Ω is chosen (on the supposition that the solution is known on the boundary) to ensure the convergence of the constructed iterative process. The following number of operations is necessary for one random walk:


– generation of n random numbers (this number depends on the initial probability π) to determine the initial point in the Markov chain: n(k_A + k_L) operations (k_A and k_L are the arithmetic and logical operations necessary for the generation of one random number), or modeling of an isotropic vector, which needs on the order of R·n(k_A + k_L) operations (the constant R depends on the efficiency of the modeling method and on the transition probability);
– calculating the coordinates of the initial or next point: p_next operations (depending on the modeling method and on the dimension d of the domain B(x));
– calculating one value of the functions: p_f; p_π, p_φ or p_k, p_P;
– calculating one sample of the random variable: 4 arithmetic operations;
– calculating the distance from the current point to the boundary ∂Ω: γ_A arithmetic and γ_L logical operations (depending on the dimension d of the domain Ω);
– verification whether the current point belongs to the chosen δ-strip ∂Ω_δ.
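To make the "spherical process" concrete, here is a minimal walk-on-spheres sketch for the pure Laplace case (no lower-order terms) on the unit square; it is my own simplified illustration rather than the authors' algorithm, and the domain, boundary data and tolerances are hypothetical. It shows the per-walk operations just listed: distance to the boundary, an isotropic jump, and the δ-strip termination.

```python
import numpy as np

rng = np.random.default_rng(0)

def dist_to_boundary(p):
    # Distance from p = (x, y) to the boundary of the unit square [0, 1]^2
    x, y = p
    return min(x, 1.0 - x, y, 1.0 - y)

def walk_on_spheres(p0, omega, delta=1e-3, n_walks=20_000):
    """Estimate u(p0) for Delta u = 0 on the unit square with Dirichlet data omega."""
    total = 0.0
    for _ in range(n_walks):
        p = np.array(p0, dtype=float)
        while dist_to_boundary(p) > delta:            # walk until the delta-strip is reached
            r = dist_to_boundary(p)
            theta = rng.uniform(0.0, 2.0 * np.pi)
            p += r * np.array([np.cos(theta), np.sin(theta)])  # uniform point on the inscribed circle
        total += omega(np.clip(p, 0.0, 1.0))          # score the nearby boundary value
    return total / n_walks

# Example: the harmonic function u(x, y) = x^2 - y^2 is recovered from its boundary values
u = lambda p: p[0] ** 2 - p[1] ** 2
print(walk_on_spheres((0.3, 0.6), u), "vs exact", u((0.3, 0.6)))
```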

The following logarithmic estimate for the average number E i of spheres on a single trajectory holds for a wide class of boundaries [5]:

E i ≤ const |ln δ|,   const > 0,   (7)

where const depends on the boundary ∂Ω. We are interested in calculating the linear functional with an a priori given accuracy ε and in attaining a good balance between the statistical and the systematic error. Let us restrict the investigation of the statistical error to the domain Ω ≡ [a, b]. To ensure a statistical error ε, it is necessary to perform i transitions in the Markov process, where i is chosen from the inequality

i > (ln ε + ln(1 − α) − ln F^(0)) / ln α − 1   (assuming F^(0) > ε (1 − α)),

where α = |λ| V_{B(x)} K, K = max_{x,t} |k(x, t)|, and the initial approximation is chosen to be the right-hand side f(x). On the other hand, the estimate (7) depends on the chosen δ-strip of the boundary. From these two estimates an expression for δ in terms of the number of transitions i is obtained: δ ≈ e^{−i/const}. Therefore, the following estimate holds for the mathematical expectation of the time required to obtain an approximation with accuracy ε using the considered grid-free MCA:

E T(GFMCA) > τ [ (n k_A + p_next + p_f + p_π + p_φ + γ_A + 4) l_A + (n k_L + γ_L + 1) l_L
+ ((R n k_A + p_next + p_f + p_k + p_P + 4 + γ_A) l_A + (R n k_L + γ_L + 1) l_L) × (ln ε + ln(1 − α) − ln³ F^(0)) [c_β σ(ξ_j^S[h])]² / (ε² ln³ α) ].

The obtained expressions for the coefficients in the computational complexity of the MCAs under consideration allow us to formulate conditions under which the grid MCA is preferable to the grid-free MCA:

– the functions that define the integral equation (2) (k(x, t), f(x), φ(x)) have a comparatively small maximum norm in the corresponding domain and their values can be computed with low complexity;


– the initial and transition probabilities are complicated to model (acceptance-rejection method);
– the dimension of the integration domain is large.

It should be noted that the grid MCAs under consideration are admissible only for integral equations with smooth functions, but techniques for avoiding singularities of this kind exist (see [1]).

5 Concluding Discussion

In this paper we deal with the performance analysis of a class of Markov chain grid Monte Carlo algorithms for solving Fredholm integral equations of the second kind. We compare this class of algorithms with a class of grid-free algorithms. The grid-free Monte Carlo uses the so-called spherical process for computing the corresponding linear functional. Obviously, the grid approach assumes higher regularity of the input data, since it includes the approximation procedure described in Section 4.1. The grid-free approach does not need an additional approximation procedure and directly produces a biased approximation to the solution. But the grid-free algorithm is more complicated, and its implementation needs more routine operations (like checking the distance from a given point to the boundary) that decrease the efficiency of the algorithm. By analyzing the regularity of the problem one may choose whether the grid or the grid-free algorithm is preferable. In particular, if the input data has higher regularity (k(x, t), f(x), φ(x) have comparatively small maximum norms), then the grid algorithm should be preferred.

References

1. Bahvalov, N.S., Zhidkov, N.P., Kobelkov, G.M.: Numerical Methods. Nauka, Moscow (1987)
2. Dimov, I.T., Alexandrov, V.N., Karaivanova, A.N.: Parallel Resolvent Monte Carlo Algorithms for Linear Algebra Problems. Mathematics and Computers in Simulation 55 (2001) 25–35
3. Dimov, I.T., Karaivanova, A.N.: Iterative Monte Carlo Algorithms for Linear Algebra Problems. In: Vulkov, L., Wasniewski, J., Yalamov, P. (eds.): Lecture Notes in Computer Science, Vol. 1196. Springer-Verlag, Berlin (1996) 150–160
4. Dimov, I.T., Tonev, O.I.: Monte Carlo Algorithms: Performance Analysis for Some Computer Architectures. J. Comput. Appl. Math. 48 (1993) 253–277
5. Ermakov, S.M., Mikhailov, G.A.: Statistical Modeling. Nauka, Moscow (1982)
6. Kantorovich, L.V., Akilov, G.P.: Functional Analysis in Normed Spaces. Pergamon Press, Oxford (1964)
7. Kantorovich, L.V., Krylov, V.I.: Approximate Methods of Higher Analysis. Physical and Mathematical State Publishing House, Leningrad (1962)
8. Kublanovskaya, V.N.: Application of Analytical Continuation by Substitution of Variables in Numerical Analysis. In: Proceedings of the Steklov Institute of Mathematics (1959)
9. Miranda, C.: Partial Differential Equations of Elliptic Type. Springer-Verlag, Berlin Heidelberg New York (1970)
10. Sabelfeld, K.K.: Monte Carlo Methods in Boundary Value Problems. Springer-Verlag, Berlin Heidelberg New York London (1991)
11. Sobol', I.M.: The Monte Carlo Method. The University of Chicago Press, Chicago (1974)
12. Vladimirov, S.V.: Equations of Mathematical Physics. Nauka, Moscow (1976)

Modeling of Carrier Transport in Nanowires

T. Gurov, E. Atanassov, M. Nedjalkov, and I. Dimov

IPP, Bulgarian Academy of Sciences, Sofia, Bulgaria
{gurov,emanouil}@parallel.bas.bg, [email protected]
Institute for Microelectronics, TU Vienna, Austria
[email protected]
Centre for Advanced Computing and Emerging Technologies, School of Systems Engineering, The University of Reading, Whiteknights, PO Box 225, Reading, RG6 6AY, UK
[email protected]

Abstract. We consider a physical model of ultrafast evolution of an initial electron distribution in a quantum wire. The electron evolution is described by a quantum-kinetic equation accounting for the interaction with phonons. A Monte Carlo approach has been developed for solving the equation. The corresponding Monte Carlo algorithm solves an NP-hard problem with respect to the evolution time. To obtain solutions for long evolution times with small stochastic error we combine both variance reduction techniques and distributed computations. Grid technologies are implemented due to the large computational efforts imposed by the quantum character of the model.

1 Introduction

The Monte Carlo (MC) methods provide approximate solutions by performing statistical sampling experiments. These methods are based on the simulation of random variables whose mathematical expectations are equal to a functional of the problem solution. Many problems in transport theory and related areas can be described mathematically by a Fredholm integral equation of the second kind:

f = IK(f) + φ.   (1)

In general, the physical quantities of interest are determined by functionals of the type

J(f) ≡ (g, f) = ∫_G g(x) f(x) dx,   (2)

where the domain G ⊂ IR^d and x ∈ G is a point in the Euclidean space IR^d. The functions f(x) and g(x) belong to a Banach space X and to the adjoint space X*, respectively, and f(x) is the solution of (1). The mathematical concept of the MC approach is based on the iterative expansion of the solution of (1): f_s = IK(f_{s−1}) + φ, s = 1, 2, …, where s is the number of iterations.

Supported by the Bulgarian Ministry of Education and Science, under grant I-1405/2004.



Thus we define a Neumann series f_s = φ + IK(φ) + … + IK^{s−1}(φ) + IK^s(f₀), s > 1, where IK^s means the s-th iteration of IK. In the case when the corresponding infinite series converges, its sum is an element f of the space X which satisfies the equation (1). The Neumann series, substituted in (2), gives rise to a sum of consecutive terms which are evaluated by the MC method with the help of random estimators. A random variable ξ is said to be a MC estimator for the functional (2) if the mathematical expectation of ξ is equal to J(f): Eξ = J(f). Therefore we can define a MC method

(1/N) Σ_{i=1}^N ξ^(i)  —P→  J(f),

where ξ^(1), …, ξ^(N) are independent values of ξ and —P→ means stochastic convergence as N → ∞. The rate of convergence is evaluated by the law of the three sigmas (see [1]):

P( |(1/N) Σ_{i=1}^N ξ^(i) − J(f)| < 3 √(Var ξ) / √N ) ≈ 0.997,

where Var(ξ) = Eξ² − E²ξ is the variance. It is seen that, when using a random estimator, the result is obtained with a statistical error [1,2]. As N increases, the statistical error decreases as N^{−1/2}. Thus, there are two types of errors - systematic (a truncation error) and stochastic (a probability error) [2]. The systematic error depends on the number of iterations of the used iterative method, while the stochastic error is related to the probabilistic nature of the MC method. The MC method still does not determine the computational algorithm: we must specify the modeling function (sampling rule) Θ = F(β₁, β₂, …), where β₁, β₂, … are uniformly distributed random numbers in the interval (0, 1). Together, the MC method and the sampling rule define a MC algorithm for estimating J(f). The case g = δ(x − x₀) is of special interest, because we are interested in calculating the value of f at the fixed point x₀ ∈ G. Every iterative algorithm uses a finite number of iterations s. In practice we define a MC estimator ξ_s for computing the functional J(f_s) with a statistical error. On the other hand, ξ_s is a biased estimator for the functional J(f) with statistical and truncation errors. The number of iterations can be a random variable when an ε-criterion is used to truncate the Neumann series or the corresponding Markov chain in the MC algorithm. The stochastic convergence rate is approximated by O(N^{−1/2}). In order to accelerate the convergence rate of the MC methods several techniques have been developed. Variance reduction techniques, like antithetic variates, stratification and importance sampling [1], reduce the variance, which is a quantity measuring the probabilistic uncertainty. Parallelism is another way to accelerate the convergence of a MC computation. If n processors execute n independent copies of the MC computation using nonoverlapping random sequences, the accumulated result has a variance n times smaller than that of a single copy.
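The N^{−1/2} behaviour of the stochastic error and the "three sigmas" band can be seen with a toy estimator; the integrand below is a hypothetical example chosen only for illustration and is unrelated to the transport problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: estimate J = E[xi] for xi = g(U), U ~ U(0, 1),
# and report the "three sigmas" band 3*sqrt(Var(xi)/N), which shrinks like N^{-1/2}.
g = lambda u: np.exp(-u)            # hypothetical integrand, J = 1 - 1/e
for N in (10**3, 10**4, 10**5, 10**6):
    xi = g(rng.random(N))
    mean = xi.mean()
    band = 3.0 * xi.std(ddof=1) / np.sqrt(N)
    print(N, mean, "+/-", band)
```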

−→ means stochastic convergence as N −→ ∞. The rate of convergence is√evaluated  V arξ ≈ by the law of the three sigmas: convergence (see [1]): P |ξ − J(f )| < 3 √ N 0.997, where V ar(ξ) = Eξ 2 − E 2 ξ is the variance. It is seen that, when using a random estimator, the result is obtained with a statistical error [1,2]. As N increases, the statistical error decreases as N −1/2 . Thus, there are two types of errors - systematic (a truncation error) and stochastic (a probability error) [2]. The systematic error depends on the number of iterations of the used iterative method, while the stochastic error is related to the the probabilistic nature of the MC method. The MC method still does not determine the computation algorithm: we must specify the modeling function (sampling rule) Θ = F (β1 , β2 , . . . , ), where β1 , β2 , . . . , are uniformly distributed random numbers in the interval (0, 1). Now both relations the MC method and the sampling rule define a MC algorithm for estimating J(f ). The case when g = δ(x − x0 ) is of special interest, because we are interested in calculating the value of f at x0 , where x0 ∈ G is a fixed point. Every iterative algorithm uses a finite number of iterations s. In practice we define a MC estimator ξs for computing the functional J(fs ) with a statistical error. On the other hand ξs is a biased estimator for the functional J(f ) with statistical and truncation errors. The number of iterations can be a random variable when an ε-criterion is used to truncate the Neumann series or the corresponding Markov chain in the MC algorithm. The stochastic convergence rate is approximated by O(N −1/2 ). In order to accelerate the convergence rate of the MC methods several techniques have been developed. Variance reductions techniques, like antithetic varieties, stratification and importance sampling [1], reduce the variance which is a quantity to measure the probabilistic uncertainly. Parallelism is an another way to accelerate the convergence of a MC computation. If n processors execute n independence of the MC computation using nonoverlaping random sequences, the accumulated result has a variance n times smaller than that of a single copy.

2 Physical Model

The early time dynamics of highly non-equilibrium semiconductor electrons is rich in quantum phenomena which are basic components of the physics of modern micro- and nano-electronics [4]. Confined systems are characterized by small spatial scales, where another classical assumption, namely that electron-phonon scattering occurs at a well defined position, loses its validity. These scales become relevant for the process of evolution of an initial electron distribution, locally excited or injected into a


semiconductor nanowire. Beyond-Boltzmann transport models for wire electrons have recently been derived [3] in terms of the Wigner function. They appear as inhomogeneous counterparts of the Levinson and Barker-Ferry equations. The latter is a physically more refined model, which accounts for the finite lifetime of the electrons due to the interaction with the phonons. The corresponding physical process can be summarized as follows. An initial non-equilibrium electron distribution is created locally in a nanowire by e.g. an optical excitation. The electrons begin to propagate along the wire, where they are characterized by two variables: the position z and the component k_z of the wave vector. We note that these are Wigner coordinates, so that a classical interpretation as "position and momentum of the electron" is not relevant. A general, time-dependent electric field E(t) can be applied along the nanowire. The field changes the wave vector with time: k_z(t') = k_z − ∫_{t'}^{t} eE(τ)/ℏ dτ, where k_z is the wave vector value at the initialization time t. Another source of modifications of the wave vector are the dissipative processes of electron-phonon interaction, responsible for the energy relaxation of the electrons. The following considerations simplify the problem, thus allowing to focus on the numerical aspects:

(i) We regard the Levinson type of equation, which bears the basic numerical properties of the physical model:

( ∂/∂t + (ℏ k_z/m) ∇_z ) f_w(z, k_z, t) = ∫_G d³k' ∫_0^t dt' { S(k_z', k_z, q_⊥, t, t') f_w(z(t'), k_z'(t'), t') − S(k_z, k_z', q_⊥, t, t') f_w(z(t'), k_z(t'), t') },   (3)

where ∫_G d³k' = ∫ dq_⊥ ∫_{−Q₂}^{Q₂} dk_z' and the domain G is specified in the next section. The spatial part z(t') of the trajectory is initialized by the value z at time t: z(t') = z − (ℏ/m) ∫_{t'}^{t} ( k_z(τ) − q_z/2 ) dτ; q_z = k_z − k_z'. The scattering function S is

S(k_z', k_z, q_⊥, t, t') = (2V/(2π)³) |Γ(q_⊥) F(q_⊥, k_z − k_z')|² × [ ( n(q') + 1/2 ± 1/2 ) cos( (1/ℏ) ∫_{t'}^{t} ( ε(k_z(τ)) − ε(k_z'(τ)) ± ℏω_{q'} ) dτ ) ].

The electron-phonon coupling F is chosen for the case of Fröhlich polar optical interaction:

F(q_⊥, k_z − k_z') = − [ (2πe²ω_{q'}/V) (1/ε_∞ − 1/ε_s) (1/(q')²) ]^{1/2};   Γ(q_⊥) = ∫ dr_⊥ |Ψ(r_⊥)|² e^{i q_⊥·r_⊥},

where ε_∞ and ε_s are the optical and static dielectric constants. Γ(q_⊥) is the Fourier transform of the square of the ground state wave function. Thus we come to the second simplifying assumption:

(ii) Very low temperatures are assumed, so that the electrons reside in the ground state Ψ(r_⊥) in the plane normal to the wire. The phonon distribution is described by the Bose function, n(q') = ( e^{ℏω_{q'}/KT} − 1 )^{−1}, with K the Boltzmann


constant and T is the temperature of the crystal. ℏω_{q'} is the phonon energy, which generally depends on q' = (q_⊥, q_z'), and ε(k_z) = (ℏ² k_z²)/2m is the electron energy.

(iii) Finally, we consider the case of no electric field applied along the wire.

Next we need to transform the transport equation into the corresponding integral form (1):

f_w(z, k_z, t) = f_{w0}(z − (ℏ k_z/m) t, k_z)
 + ∫_0^t dt'' ∫_{t''}^t dt' ∫_G d³k' K₁(k_z, k_z', q_⊥, t', t'') f_w(z + z₀(k_z', q_z, t, t', t''), k_z', t'')
 + ∫_0^t dt'' ∫_{t''}^t dt' ∫_G d³k' K₂(k_z, k_z', q_⊥, t', t'') f_w(z + z₀(k_z, q_z, t, t', t''), k_z, t''),   (4)

where z₀(k_z, q_z, t, t', t'') = −(ℏ k_z/m)(t − t'') + (ℏ q_z/2m)(t' − t''), and

K₁(k_z, k_z', q_⊥, t', t'') = S(k_z', k_z, q_⊥, t', t'') = −K₂(k_z, k_z', q_⊥, t', t'').

We note that the evolution problem becomes inhomogeneous due to the spatial dependence of the initial condition f_{w0}. The shape of the wire modifies the electron-phonon coupling via the ground state in the normal plane. If the wire cross section is chosen to be a square with side a, the corresponding factor Γ becomes:

|Γ(q_⊥)|² = |Γ(q_x)Γ(q_y)|² = [ 4π² / ( q_x a ((q_x a)² − 4π²) ) ]² 4 sin²(a q_x/2) · [ 4π² / ( q_y a ((q_y a)² − 4π²) ) ]² 4 sin²(a q_y/2).

We note that the Neumann series of integral equations of the type (4) converges [5] and can be evaluated by a MC estimator [2].

D

by a MC method. Here we specify that the phase space point (z, kz ) belongs to a rectangular domain D = (−Q1 , Q1 ) × (−Q2 , Q2 ), and t ∈ (0, T ). The function g(z, kz , t) depends on the quantity of interest. In particular the Wigner function, the wave vector and density distributions, and the energy density are given by: (i) gW (z, kz , t) = δ(z − z0 )δ(kz − kz,0 )δ(t − t0 ), (ii) gn (z, kz , t) = δ(kz − kz,0 )δ(t − t0 ), (iii) gk (z, kz , t) = δ(z − z0 )δ(t − t0 ), (iv) g(z, kz , t) = (kz )δ(z − z0 )δ(t − t0 )/gn (z, kz , t), We construct a biased MC estimator for evaluating the functional (5) using backward time evolution of the numerical trajectories in the following way:

Modeling of Carrier Transport in Nanowires

743

 α  g(z, kz , t) g(z, kz , t)  α W0 fw,0 (., kz , 0) + Wj fw,0 ., kz,j , tj , pin (z, kz , t) pin (z, kz , t) j=1 s

ξs [Jg (f )] = where

 α  , tj fw,0 ., kz,j    fw,0 z + z0 (kz,j−1 , kz,j−1 − kz,j , tj−1 , tj , tj ), kz,j , tj , if α = 1, = fw,0 z + z0 (kz,j−1 , kz,j − kz,j−1 , tj−1 , tj , tj ), kz,j−1 , tj , if α = 2, α Wjα = Wj−1

Kα (kz j−1 , kj , tj , tj ) , W0α = W0 = 1, α = 1, 2, j = 1, . . . , s . pα ptr (kj−1 , kj , tj , tj )

The probabilities pα , (α = 1, 2) are chosen to be proportional to the absolute value of the kernels in (4). pin (z, kz , t) and ptr (k, k , t , t ), which are an initial density and a transition density, are chosen to be tolerant1 to the function g(z, kz , t) and the kernels, respectively. The first point (z, kz 0 , t0 ) in the Markov chain is chosen using the initial density, where kz 0 is the third coordinate of the wave vector k0 . Next points (kz j , tj , tj ) ∈ (−Q2 , Q2 ) × (tj , tj−1 ) × (0, tj−1 ) of the Markov chain: (kz 0 , t0 ) → (kz 1 , t1 , t1 ) → . . . → (kz j , tj , tj ) → . . . → (kz s , , ts , ts ), j = 1, 2, . . . , s do not depend on the position z of the electrons. They are sampled using the transition density ptr (k, k , t , t ). The z coordinate of the generated wave vectors is taken for the Markov chain, while the normal coordinates give values for q⊥ . As the integral on q⊥ can be assigned to the kernel these values do not form a Markov chain but are independent on the consecutive steps. The time tj conditionally depends on the selected time tj . The Markov chain terminates in time ts < ε1 , where ε1 is related to the truncation error introduced in first section. Now the functional (5) can be evaluated by N independent N P samples of the obtained estimator, namely, N1 i=1 (ξs [Jg (f )])i −→ Jg (fs ) ≈ Jg (f ), P

where −→ means stochastic convergence as N → ∞; fs is the iterative solution obtained by the Neumann series of (4), and s is the number of iterations. To define a MC algorithm we have to specify the initial and transition densities, as well the modeling function (or sampling rule). The modeling function describes the rule needed to calculate the states of the Markov chain by using uniformly distributed random numbers in the interval (0, 1). The transition density is chosen: ptr (k, k , t , t ) = p(k /k)p(t, t , t ), 1   2 where p(t, t , t ) = p(t, t )p(t /t ) = 1t (t−t ], and c1 is  ) p(k /k) = c1 /(k − k) the normalized constant. Thus, by simulating the Markov chain under consideration, the desired physical quantities (values of the Wigner function, the wave vector relaxation, the electron and energy density) can be evaluated simultaneously.

4 Grid Implementation and Numerical Results

The computational complexity of an MC algorithm can be measured by the quantity CC = N × τ × M(s_{ε₁}). The number of random walks, N, and the average number

¹ r(x) is tolerant of g(x) if r(x) > 0 when g(x) ≠ 0 and r(x) ≥ 0 when g(x) = 0.

[Figure 1 plot: k_z distribution [a.u.] versus k_z [10⁻²/nm], with curves at 100 fs, 150 fs and 175 fs.]

Fig. 1. Wave vector relaxation of the highly non-equilibrium initial condition. The quantum solution shows broadening of the replicas. Electrons appear in the classically forbidden region above the initial condition.

of transitions in the Markov chain, M(s_{ε₁}), are related to the stochastic and systematic errors [5]. The mean time for modeling one transition, τ, depends on the complexity of the transition density functions and on the sampling rule, as well as on the choice of the random number generator (rng). It is proved [5,6] that the stochastic error has order O(exp(c₂t)/N^{1/2}), where t is the evolution time and c₂ is a constant depending on the kernels of the quantum kinetic equation under consideration. This estimate shows that when t is fixed and N → ∞ the error decreases, but for large t the factor exp(c₂t) looks ominous. Therefore, the algorithm solves an NP-hard problem concerning the evolution time. To solve this problem for long evolution times with small stochastic error we have to combine both MC variance reduction techniques and distributed or parallel computations. By using the Grid environment provided by the EGEE-II project middleware² [8] we were able to reduce the computing time of the MC algorithm under consideration. The simulations are parallelized on the existing Grid infrastructure by splitting the underlying random number sequences. The numerical results discussed in Fig. 1 are obtained for zero temperature and GaAs material parameters: the electron effective mass is 0.063, the optical phonon energy is 36 meV, the static and optical dielectric constants are ε_s = 12.9 and ε_∞ = 10.92. The initial condition is a product of two Gaussian distributions of the energy and space. The k_z distribution corresponds to a generating laser pulse with an excess energy of about 150 meV. This distribution was estimated for 130 points in the interval (0, 66), where Q₂ = 66 × 10⁷ m⁻¹. The z distribution is centered around zero (see Figures 2-3) and it is estimated for 400 points in the interval (−Q₁, Q₁), where Q₁ = 400 × 10⁹ m⁻¹. The side a of the wire is chosen to be 10 nanometers. The SPRNG library has been used to produce independent and

² The Enabling Grids for E-sciencE-II (EGEE-II) project is funded by the EC under grant INFSO-RI-031688. For more information see http://www.eu-egee.org/.


[Figure 2 plot: electron density n [a.u.] (logarithmic scale) versus z [nm] at 200 fs, together with the ballistic density.]

Fig. 2. Electron density along the wire after 200 fs. The ballistic curve outlines the largest distance which can be reached by classical electrons. The quantum solution reaches larger distances due to the electrons scattered in the classically forbidden energy region.

[Figure 3 plot: mean energy [meV] versus z [nm], together with the ballistic density.]

Fig. 3. Energy density after 175 fs evolution time. A comparison with the ballistic curve shows that the mean kinetic energy per particle is lower in the central region. On the contrary, hot electrons reside in the regions apart from the center. These electrons are faster than the ballistic ones and thus cover larger distances during the evolution.

non-overlapping random sequences [9]. Successful tests of the algorithm were performed at the Bulgarian SEE-GRID³ sites. The MPI implementation was MPICH 1.2.6, and the execution is controlled from the Computing Element via the Torque batch

³ The South Eastern European GRid-enabled eInfrastructure Development-2 (SEE-GRID-2) project is funded by the EC under grant FP6-RI-031775. For more information see http://www.seegrid.eu/.


Table 1. The CPU time (seconds) for all points (in which the physical quantities are estimated), the speed-up, and the parallel efficiency. The number of random walks is N = 100000. The evolution time is 100 fs.

Number of CPUs   CPU Time (s)   Speed-up   Parallel Efficiency
      2              9790          -              -
      4              4896        1.9996         0.9998
      6              3265        2.9985         0.9995

system. The timing results for evolution time t=100 femtoseconds are shown in Table 1. The parallel efficiency is close to 100%.
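The speed-up and efficiency columns of Table 1 can be reproduced with a few lines, assuming (as the numbers imply) that both are measured relative to the 2-CPU run:

```python
# Reproducing the speed-up and parallel efficiency figures of Table 1,
# taking the 2-CPU run as the baseline (an assumption consistent with the table).
times = {2: 9790, 4: 4896, 6: 3265}
base_cpus, base_time = 2, times[2]
for cpus, t in times.items():
    speedup = base_time / t                       # e.g. 9790 / 4896 = 1.9996
    efficiency = speedup / (cpus / base_cpus)     # e.g. 1.9996 / 2 = 0.9998
    print(cpus, round(speedup, 4), round(efficiency, 4))
```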

5 Conclusion

A quantum-kinetic model for the evolution of an initial electron distribution in a quantum wire has been introduced in terms of the electron Wigner function. The physical quantities, expressed as functionals of the Wigner function, are evaluated within a stochastic approach. The developed MC method is characterized by the computational demands typical of quantum algorithms: the stochastic variance grows exponentially with the evolution time. The importance sampling technique is used to reduce the variance of the MC method. The suggested MC algorithm evaluates the desired physical quantities simultaneously, using simulation of the Markov chain under consideration. Grid technologies are implemented to reduce the computational efforts.

References

1. M.A. Kalos, P.A. Whitlock: Monte Carlo Methods. Wiley Interscience, New York (1986)
2. G.A. Mikhailov: New MC Methods with Estimating Derivatives. Utrecht, The Netherlands (1995)
3. M. Nedjalkov et al.: Wigner transport models of the electron-phonon kinetics in quantum wires. Physical Review B, vol. 74, pp. 035311-1–035311-18 (2006)
4. T.C. Schmidt, K. Moehring: Stochastic Path-Integral Simulation of Quantum Scattering. Physical Review A, vol. 48, no. 5, pp. R3418–R3420 (1993)
5. T.V. Gurov, P.A. Whitlock: An Efficient Backward Monte Carlo Estimator for Solving of a Quantum Kinetic Equation with Memory Kernel. Mathematics and Computers in Simulation 60, pp. 85-105 (2002)
6. T.V. Gurov et al.: Femtosecond Relaxation of Hot Electrons by Phonon Emission in Presence of Electric Field. Physica B 314, pp. 301-304 (2002)
7. T.V. Gurov, I.T. Dimov: A Parallel Monte Carlo Method For Electron Quantum Kinetic Equation. Lect. Notes in Comp. Sci. 2907, Springer-Verlag, pp. 151-161 (2004)
8. EGEE Grid Middleware, http://lcg.web.cern.ch/LCG/Sites/releases.html
9. Scalable Parallel Random Number Generators Library for Parallel Monte Carlo Computations, SPRNG 1.0 and SPRNG 2.0, http://sprng.cs.fsu.edu

Monte Carlo Numerical Treatment of Large Linear Algebra Problems

Ivan Dimov, Vassil Alexandrov, Rumyana Papancheva, and Christian Weihrauch

Centre for Advanced Computing and Emerging Technologies, School of Systems Engineering, The University of Reading, Whiteknights, PO Box 225, Reading, RG6 6AY, UK
{i.t.dimov, v.n.alexandrov, c.weihrauch}@reading.ac.uk
Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bonchev 25 A, 1113 Sofia, Bulgaria
[email protected], [email protected]

Abstract. In this paper we deal with the performance analysis of Monte Carlo algorithms for large linear algebra problems. We consider the applicability and efficiency of the Markov chain Monte Carlo for large problems, i.e., problems involving matrices with a number of non-zero elements ranging between one million and one billion. We concentrate on the analysis of the almost Optimal Monte Carlo (MAO) algorithm for evaluating bilinear forms of matrix powers, since they form the so-called Krylov subspaces. Results are presented comparing the performance of the Robust and Non-robust Monte Carlo algorithms. The algorithms are tested on large dense matrices as well as on large unstructured sparse matrices. Keywords: Monte Carlo algorithms, large-scale problems, matrix computations, performance analysis, iterative process.

1 Introduction

Under large we consider problems involving dense or general sparse matrices with a number of non-zero elements ranging between one million and one billion. It is known that Monte Carlo methods give statistical estimates for bilinear forms of the solution of systems of linear algebraic equations (SLAE) by performing random sampling of a certain random variable, whose mathematical expectation is the desired solution [8]. The problem of variance estimation, in the optimal case, has been considered for extremal eigenvalues [7,9]. In [5,6] we analyse the errors of iterative Monte Carlo for computing bilinear forms of matrix powers. If one is interested to apply Markov chain Monte Carlo for large 

Partially supported by the Bulgarian Ministry of Education and Science, under grant I-1405/2004. The authors would like to acknowledge the support of the European Commission’s Research Infrastructures activity of the Structuring the European Research Area programme, contract number RII3-CT-2003-506079 (HPC-Europa).



problems, then the applicability and robustness of the algorithm should be studied. Running large-scale linear algebra problems on parallel computational systems introduces many additional difficulties connected with data parallelization, distribution of parallel subtasks and parallel random number generators. At the same time one may expect that the influence of unbalanced matrices is much bigger for smaller matrices (of size 100, or so) (see [5]), and that robustness increases with the matrix size. It is reasonable to consider large matrices, and particularly large unstructured sparse matrices, since such matrices appear in many important real-life computational problems. We are interested in the bilinear form of matrix powers, since it is a basic subtask for many linear algebra problems:

(v, A^k h).   (1)

If x is the solution of a SLAE Bx = b, then

(v, x) = ( v, Σ_{i=0}^k A^i h ),

where the Jacobi Over-relaxation Iterative Method has been used to transform the SLAE into the problem x = Ax + h. For an arbitrary large natural number k the Rayleigh quotient can be used to obtain an approximation for λ₁, the dominant eigenvalue, of a matrix A:

λ₁ ≈ (v, A^k h) / (v, A^{k−1} h).

In the latter case we should restrict our consideration to real symmetric matrices in order to deal with real eigenvalues. Thus it is clear that having an efficient way of calculating (1) is important. This is especially important in cases where we are dealing with large matrices.
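As a deterministic cross-check of this use of bilinear forms, the sketch below compares the Rayleigh-quotient approximation with numpy's eigenvalue solver; the symmetric test matrix with positive entries is a hypothetical choice (it gives a well-separated dominant eigenvalue, so the quotient converges quickly) and is not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 200, 30
M = rng.random((n, n))
A = (M + M.T) / 2.0                 # real symmetric, positive entries: well-separated lambda_1
v = rng.random(n)
h = rng.random(n)

Akm1_h = np.linalg.matrix_power(A, k - 1) @ h
Ak_h = A @ Akm1_h
print("Rayleigh estimate:", (v @ Ak_h) / (v @ Akm1_h))
print("numpy eigenvalue :", np.linalg.eigvalsh(A)[-1])
```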

2 Markov Chain Monte Carlo

The algorithm we use in our runs is the so-called Almost Optimal Monte Carlo (MAO) algorithm studied in [1,3,4]. We consider a Markov chain T = α₀ → α₁ → α₂ → … → α_k → … with n states. The random trajectory (chain) T_k of length k starting in the state α₀ is defined as follows: T_k = α₀ → α₁ → … → α_j → … → α_k, where α_j means the number of the state chosen, for j = 1, …, k. Assume that P(α₀ = α) = p_α, P(α_j = β | α_{j−1} = α) = p_{αβ}, where p_α is the probability that the chain starts in state α and p_{αβ} is the transition probability to state β after being in state α. Probabilities p_{αβ} define a transition matrix P. In all algorithms used in this study we will consider a special choice of density distributions p_i and p_{ij} defined as follows:

p_i = |v_i| / ‖v‖,   ‖v‖ = Σ_{i=1}^n |v_i|,   and   p_{ij} = |a_ij| / ‖a_i‖,   ‖a_i‖ = Σ_{j=1}^n |a_ij|.   (2)


The specially defined Markov chain induces the following products of matrix/vector entries and norms:

A_v^k = v_{α₀} Π_{s=1}^k a_{α_{s−1}α_s};   ‖A_v^k‖ = ‖v‖ × Π_{s=1}^k ‖a_{α_{s−1}}‖.

We have shown in [5] that the value

θ̄^(k) = (1/N) Σ_{i=1}^N θ_i^(k) = (1/N) Σ_{i=1}^N { sign{A_v^k} ‖A_v^k‖ h_{α_k} }_i   (3)

can be considered as a MC approximation of the form (v, A^k h). For the probability error of this approximation one has

R_N^(k) = | (v, A^k h) − θ̄^(k) | = c_p σ{θ^(k)} N^{−1/2},

where c_p is a constant. In fact, (3) together with the sampling rules using the probabilities (2) defines the MC algorithm used in our runs. Naturally, the quality of the MC algorithm depends on the behaviour of the standard deviation σ{θ^(k)}. So, there is a reason to consider a special class of robust MC algorithms. Following [5], under robust MC algorithms we understand algorithms for which the standard deviation does not increase with the matrix power k. Robustness in our consideration is therefore not only a characteristic of the quality of the algorithm; it also depends on the input data, i.e., on the matrix under consideration. The better balanced the iterative matrix is, and the smaller its norm, the better the chances of obtaining a robust MC algorithm.
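A minimal sketch of the estimator (3) is given below. It is an illustration written for this overview, with hypothetical random test data; it simply follows the sampling rules (2) and scores sign{A_v^k} ‖A_v^k‖ h_{α_k}, and it is compared against a direct evaluation of (v, A^k h).

```python
import numpy as np

rng = np.random.default_rng(0)

def mao_bilinear_form(v, A, h, k, n_chains=20_000):
    """MAO sketch for (v, A^k h): chains with p_i ~ |v_i| and p_ij ~ |a_ij| (rows assumed non-zero)."""
    n = len(v)
    v_norm = np.abs(v).sum()
    row_norm = np.abs(A).sum(axis=1)
    p0 = np.abs(v) / v_norm
    P = np.abs(A) / row_norm[:, None]
    acc = 0.0
    for _ in range(n_chains):
        a = rng.choice(n, p=p0)
        sign = np.sign(v[a])
        weight = v_norm
        for _ in range(k):
            a_next = rng.choice(n, p=P[a])
            sign *= np.sign(A[a, a_next])
            weight *= row_norm[a]                 # accumulate ||v|| * prod ||a_{alpha_{s-1}}||
            a = a_next
        acc += sign * weight * h[a]               # score sign{A_v^k} ||A_v^k|| h_{alpha_k}
    return acc / n_chains

# Small check against a deterministic evaluation (hypothetical test data)
n, k = 30, 5
A = (rng.random((n, n)) - 0.2) / n
v, h = rng.random(n), rng.random(n)
print(mao_bilinear_form(v, A, h, k), "vs", v @ np.linalg.matrix_power(A, k) @ h)
```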

3 Numerical Experiments

In this section we present results of an experimental study of the quality of the Markov chain Monte Carlo for large matrices. We run algorithms for evaluating mainly bilinear forms of matrix powers, as a basic Monte Carlo iterative algorithm, as well as the algorithm for evaluating the dominant eigenvalue. In our experiments we use dense and unstructured sparse matrices of sizes n = 1000, n = 5000, n = 10000, n = 15000, n = 20000, n = 40000. We can control some properties of the random matrices. Some of the matrices are well balanced (in some sense the matrices are close to stochastic matrices), some of them are not balanced, and some are completely unbalanced. Some of the iterative matrices have small norms, which makes the Markov chain algorithm robust (as we showed in Section 2), and some of the matrices have large norms. Since the balancing is responsible for the variance of the random variable, when dealing with unbalanced matrices we may expect higher values of the stochastic error.


In such a way we can study how the stochastic error propagates with the number of Monte Carlo iterations for different matrices. Dealing with matrices of small norms, we may expect high robustness of the Markov chain Monte Carlo and a high rate of convergence. To be able to compare the accuracy of various runs of the algorithm for computing bilinear forms of matrix powers, we also compute them by a direct deterministic method using double precision. These runs are more time consuming, since the computational complexity is higher than the complexity of the Markov chain Monte Carlo, but we accept the results as "exact results" and use them to analyse the accuracy of the results produced by our Monte Carlo code. For sparse matrices we use the Yale sparse matrix format [2]. We exploit the sparsity in the sense that the used almost optimal Monte Carlo algorithm only deals with non-zero matrix entries. The Yale sparse matrix format is very suitable since it allows large unstructured sparse matrices to be presented in a compact form in the processor's memory [2]. The latter fact allows jumps of the Markov chain from one non-zero element of a given matrix to another to be performed very fast (a short sketch of this row-wise access is given below). We also study the scalability of the algorithms under consideration. We run our algorithms on different computer systems. The used systems are given below:

– IBM p690+ Regatta system cluster of IBM SMP nodes, containing a total of 1536 IBM POWER5 processors;
– Sun Fire 15K server with 52 x 0.9 GHz UltraSPARC III processors;
– SGI Prism equipped with 8 x 1.5 GHz Itanium II processors and 16 GByte of main memory.

On Figure 1 we present results for the Monte Carlo solution of the bilinear form of a dense matrix of size n = 15000 as a function of the matrix power k (the matrix power corresponds to the number of moves in every Markov chain used in the computations). The Monte Carlo algorithm used in the calculations is robust. For comparison we present exact results obtained by a deterministic algorithm with double precision. One cannot see any differences on this graph. As one can expect, the error of the robust Monte Carlo is very small and it decreases with increasing matrix power k. In fact the stochastic error exists, but it increases rapidly with increasing k. This fact is shown more precisely on Figure 2. The Monte Carlo probability error R_N^(k) and the relative Monte Carlo probability error Rel_N^(k) were computed in the following way:

R_N^(k) = | (1/N) Σ_{i=1}^N θ_i^(k) − (v, A^k h) | / (v, h),   Rel_N^(k) = R_N^(k) (v, h) / (v, A^k h).

In the robust case the relative MC error decreases to values smaller than 10⁻²² when the number of MC iterations is 20, while in the non-robust case the corresponding values slightly increase with increasing matrix power (for k = 20 the values are between 10⁻³ and 10⁻², see Figure 2). If we apply both the robust and the non-robust Markov chain Monte Carlo to compute the dominant eigenvalue of the same dense matrices of size n = 15000, then we get the result presented on Figure 3.
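The row-wise access pattern mentioned above can be sketched with a CSR container (CSR is essentially the Yale format); scipy and the toy matrix below are assumptions introduced for illustration, since the paper itself only states that the format stores the non-zero entries per row.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)

# A CSR ("Yale"-style) matrix stores, per row, only its non-zero columns and values,
# so one Markov-chain transition touches just the non-zeros of the current row.
A = sparse_random(10_000, 10_000, density=1e-3, format="csr", random_state=0)

def next_state(A, i, rng):
    """Sample the next state from row i with MAO probabilities |a_ij| / sum_j |a_ij|."""
    start, end = A.indptr[i], A.indptr[i + 1]
    cols, vals = A.indices[start:end], np.abs(A.data[start:end])
    return rng.choice(cols, p=vals / vals.sum())

i = int(np.argmax(np.diff(A.indptr)))   # pick a row that certainly has non-zeros
print(next_state(A, i, rng))
```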


[Figure 1 plot: Result (logarithmic scale, 1 down to 1e-22) versus k = 0…20; curves: n=15000 exact results and robust MC algorithm, dense matrices.]

Fig. 1. Comparison of Robust Monte Carlo algorithm results with the exact solution for the bilinear form of a dense matrix of size n = 15000

[Figure 2 plot: error (logarithmic scale, 1 down to 1e-25) versus k = 0…20; curves: robust and non-robust MC algorithms, dense matrices, n = 15000.]

Fig. 2. Comparison of the Relative MC error for the robust and non-robust algorithms. Matrices of size n = 15000 are used.

We should stress that the random matrices are very similar; the only difference is that in the robust case the matrix is well balanced. From Figure 3 one can see that the oscillations of the solution are much smaller when the matrix


[Figure 3 plot: error (from -0.0008 to 0.0006) versus k = 0…18; curves: robust and non-robust MC algorithms, dense matrices, n = 15000.]

Fig. 3. The relative MC error for the robust and non-robust algorithms. The matrix size is n = 15000.

is well balanced. The reason for that is that the variance for the well-balanced matrix is much smaller than for the non-balanced matrix. Very similar results are obtained for matrices of size 1000, 5000, 10000, 20000, and 40000. Some results for sparse matrices are plotted on Figures 4 and 5. Results of the Monte Carlo computation of the bilinear form of an unstructured sparse matrix of size 10000 are plotted on Figure 4. The MC results are compared with the exact results. On this graph one cannot find any differences between the MC results and the exact solution. One can see that if the robust algorithm is applied for solving systems of linear algebraic equations or for computing the dominant eigenvalue of real symmetric matrices (in order to get real eigenvalues), then just 5 or 6 Monte Carlo iterations are enough to get a fairly accurate solution (with 4 correct digits). If we present the same results for the same sparse matrix on a logarithmic scale, then one can see that after 20 iterations the relative MC error is smaller than 10⁻²⁰, since the algorithm is robust and with an increasing number of iterations the stochastic error decreases dramatically. Similar results for a random sparse matrix of size 40000 are shown on Figure 5. Since the algorithm is robust and the matrix is well balanced, the results of the MC computations are very close to the results of the deterministic algorithm performed with double precision. Our observation from the numerical experiments performed is that the error increases linearly if k is increasing. The larger the matrix is, the smaller the influence of non-balancing is. This is also an expected result, since the stochastic error is proportional to the standard deviation of the random variable computed as a weight of the Markov chain. When a random matrix is very large it becomes


[Figure 4 plot: Result (0 to 0.12) versus k = 0…20; curves: n=10000 exact results (dense matrices) and robust MC algorithm with sparse matrix approximation.]

Fig. 4. Comparison of the MC results for bilinear form of matrix powers for a sparse matrix of size n = 10000 with the exact solution. 5 or 6 Monte Carlo iterations are enough for solving the system of linear algebraic equations or for computing the dominant eigenvalue for the robust Monte Carlo.

[Figure 5 plot: Result (logarithmic scale, 1 down to 1e-10) versus k = 1…10; curves: n=40000 exact results and robust MC, sparse matrices.]

Fig. 5. Comparison of the MC results for bilinear form of matrix powers for a sparse matrix of size n = 40000 with the exact solution


closer (in some sense) to a stochastic matrix, and the standard deviation of the random variable statistically decreases, which increases the accuracy.

4 Conclusion

In this paper we have analysed the performance of the proposed MC algorithm for linear algebra problems. We focused on computing the bilinear form of matrix powers (v, A^k h) as a basic subtask of MC algorithms for solving a class of linear algebra problems. We studied the applicability and robustness of the Markov chain Monte Carlo. The robustness of the Monte Carlo algorithm with large dense and unstructured sparse matrices has been demonstrated. An important observation is that the balancing of the input matrix matters a great deal for MC computations, since it decreases the stochastic error and improves the robustness.

References

1. V. Alexandrov, E. Atanassov, I. Dimov: Parallel Quasi-Monte Carlo Methods for Linear Algebra Problems. Monte Carlo Methods and Applications, Vol. 10, No. 3-4 (2004), pp. 213-219.
2. R.E. Bank, C.C. Douglas: Sparse matrix multiplication package (SMMP). Advances in Computational Mathematics, Vol. 1, No. 1, February (1993), pp. 127-137.
3. I. Dimov: Minimization of the Probable Error for Some Monte Carlo Methods. Proc. Int. Conf. on Mathematical Modeling and Scientific Computation, Albena, Bulgaria, Sofia, Publ. House of the Bulgarian Academy of Sciences, 1991, pp. 159-170.
4. I. Dimov: Monte Carlo Algorithms for Linear Problems. Pliska (Studia Mathematica Bulgarica), Vol. 13 (2000), pp. 57-77.
5. I. Dimov, V. Alexandrov, S. Branford, C. Weihrauch: Error Analysis of a Monte Carlo Algorithm for Computing Bilinear Forms of Matrix Powers. Computational Science (V.N. Alexandrov et al., Eds.), Lecture Notes in Computer Science, Springer-Verlag Berlin Heidelberg, Vol. 3993 (2006), pp. 632-639.
6. C. Weihrauch, I. Dimov, S. Branford, V. Alexandrov: Comparison of the Computational Cost of a Monte Carlo and Deterministic Algorithm for Computing Bilinear Forms of Matrix Powers. Computational Science (V.N. Alexandrov et al., Eds.), Lecture Notes in Computer Science, Springer-Verlag Berlin Heidelberg, Vol. 3993 (2006), pp. 640-647.
7. I. Dimov, A. Karaivanova: Parallel computations of eigenvalues based on a Monte Carlo approach. Journal of Monte Carlo Methods and Applications, Vol. 4, No. 1 (1998), pp. 33-52.
8. J.R. Westlake: A Handbook of Numerical Matrix Inversion and Solution of Linear Equations. John Wiley & Sons, Inc., New York, London, Sydney, 1968.
9. M. Mascagni, A. Karaivanova: A Parallel Quasi-Monte Carlo Method for Computing Extremal Eigenvalues. Monte Carlo and Quasi-Monte Carlo Methods 2000, Springer, pp. 369-380.

Simulation of Multiphysics Multiscale Systems: Introduction to the ICCS’2007 Workshop Valeria V. Krzhizhanovskaya1 and Shuyu Sun2 1

Section Computational Science, Faculty of Science, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands [email protected] 2 Department of Mathematical Sciences, Clemson University O-221 Martin Hall, Clemson, SC 29634-0975, USA [email protected]

Abstract. Modeling and simulation of multiphysics multiscale systems poses a grand challenge to computational science. To adequately simulate numerous intertwined processes characterized by different spatial and temporal scales (often spanning many orders of magnitude), sophisticated models and advanced computational techniques are required. The aim of the workshop on Simulation of Multiphysics Multiscale Systems (SMMS) is to facilitate the progress in this multidisciplinary research field. We provide an overview of the recent trends and latest developments, with special emphasis on the research projects selected for presentation at the SMMS'2007 workshop. Keywords: Multiphysics, Multiscale, Complex systems, Modeling, Simulation, ICCS, Workshop.

1 Introduction

Real-life processes are inherently multiphysics and multiscale. From atoms to galaxies, from amino-acids to living organisms, nature builds systems that involve interactions amongst a wide range of physical phenomena operating at different spatial and temporal scales. Complex flows, fluid-structure interactions, plasma and chemical processes, thermo-mechanical and electromagnetic systems are just a few examples essential for fundamental and applied sciences. Numerical simulation of such complex multiscale phenomena is vital for better understanding Nature and for advancing modern technologies. Due to the tremendous complexity of multiphysics multiscale systems, adequate simulation requires development of sophisticated models and smart methods for coupling different scales and levels of description (nano-micro-meso-macro). Until recently, such coupled modeling has been computationally prohibitive. But spectacular advances in computer performance and emerging technologies of parallel distributed grid computing have provided the community of computational physicists with the tools to break the barriers and bring simulation to a higher level of detail and accuracy. On the other hand, this progress calls for new efficient numerical algorithms and advanced computational techniques specific to the field where coupling different models or scales within one simulation is essential.


In the last decade, modeling and simulation showed a clear trend away from simplified models that treat the processes on a single scale toward advanced self-adapting multiscale and multi-model simulations. The importance of such advanced computer simulations is recognized by various research groups and supported by national and international projects, e.g. the Dutch Computational eScience Initiative [1], the SCaLeS initiative in the USA [2] and the Virtual Physiological Human EU project [3]. Many significant developments were accomplished as a result of joint efforts in the multidisciplinary research society of physicists, biologists, computational scientists and computer experts. To boost scientific cross-fertilization and promote collaboration of these diverse groups of specialists, we have launched a series of mini-symposia on Simulation of Multiphysics Multiscale Systems (SMMS) in conjunction with the International Conference on Computational Sciences (ICCS) [4]. The fourth workshop in this series, organized as part of ICCS'2007, expands the scope of the meeting from physics and engineering to biological and biomedical applications. This includes computational models of tissue- and organo-genesis, tumor growth, blood vessel formation and interaction with the hosting tissue, biochemical transport and signaling, biomedical simulations for surgical planning, etc. The topics traditionally addressed by the symposium include modeling of multiphysics and/or multiscale systems on different levels of description, novel approaches to combine different models and scales in one problem solution, advanced numerical methods for solving multiphysics multiscale problems, new algorithms for parallel distributed computing specific to the field, and challenging multiphysics multiscale applications from industry and academia. A large collection of rigorously reviewed papers selected for the workshops highlights modern trends and recent achievements [5]. It shows in particular the progress made in coupling different models (such as continuous and discrete models; quantum and classical approaches; deterministic and stochastic techniques; nano, micro, meso and macro descriptions) and suggests various coupling approaches (e.g. homogenization techniques, multigrid and nested grids methods, variational multiscale methods; embedded, concurrent, integrated or hand-shaking multiscale methods, domain bridging methods, etc.). A selected number of papers have been published in the special issues of International Journal for Multiscale Computational Engineering [6-7], collecting state-of-the-art methods for multiscale multiphysics applications covering a large spectrum of topics such as multiphase flows, discharge plasmas, turbulent combustion, chemical vapor deposition, fluid-structure interaction, thermo-mechanical and magnetostrictive systems, and astrophysics simulation. In this paper we overview the latest developments in modeling and simulation of multiphysics multiscale systems exemplified by the research presented at the SMMS'2007 workshop.

2 Overview of Work Presented in This Workshop

The papers presented in this workshop cover state-of-the-art simulations of multiphysics multiscale problems; they represent ongoing research projects on various important topics relevant to the modeling and computation of these complex systems. Numerical simulations of these problems require two essential components. The first one is the development of sophisticated models for each physical process, characterized by its own specific scales and its own mechanisms, and the integration of these models into one seamless simulation. Coupling or extension of atomistic and continuum models studied in [8-14] shows that sophisticated modeling is essential to accurately represent the physical world. Similarly, the works in [15-18] demonstrate that biological and biomedical systems are intrinsically multiscale in nature and require multiscale modeling. The second essential component for numerical simulation of multiphysics and multiscale problems includes efficient numerical algorithms and advanced computational techniques. Computational methodologies and programming tools [19-21] and advanced mathematical and numerical algorithms [22-25] are indispensable for efficient implementation of multiscale multiphysics models, which are computationally very intensive and often intractable with ordinary methods. Cellular automata [11,26,27] and the lattice Boltzmann method [28-31], which can be considered both as modeling tools and as numerical techniques, prove to be very powerful and promising in modeling complex flows and other multiscale complex systems.

The projects in [8-10] investigate computationally efficient yet physically meaningful ways of coupling discrete and continuum models across multiple scales. Another way of treating multiscale problems is to develop single-scale approximation models. Papers [12,13] present the development and analysis of models at an atomic or molecular scale, while project [11] couples multiple continuum models at a macroscopic scale. In [8], an adaptively coupled approach is presented for compressible viscous flows, based on the overlapped Schwarz coupling procedure. The continuum domain is described by the Navier-Stokes equations, solved using a finite volume formulation in compressible form to capture the shock, and the molecular domain is solved by the Direct Simulation Monte Carlo method. Work conducted in [9] leads to the development and application of two computational tools linking atomistic and continuum models of gaseous systems: the first tool, a Unified Flow Solver for rarefied and continuum flows, is based on a direct Boltzmann solver and kinetic CFD schemes; the second tool is a multiscale computational environment integrating CFD tools with Kinetic Monte Carlo and Molecular Dynamics tools. Paper [10] describes an application of the Unified Flow Solver (UFS) to complex gas flows with rarefied and continuum regions. The UFS methodology is based on the direct numerical solution of the Boltzmann equation for rarefied flow domains and on the kinetic schemes of gas dynamics (for the Euler or Navier-Stokes equations) for continuum flow domains. In [13], molecular dynamics simulations are extended to the slow dynamics that can arise in materials science, chemistry, physics and biology. In particular, the hyperdynamics method developed for low-dimensional energy-dominated systems is extended to simulate slow dynamics in general atomistic systems. In [12], a new isothermal quantum Euler model is derived, and the asymptotic behavior of the quantum Euler system is formally analyzed in the semiclassical and zero-temperature limits. To simulate the process of biomass conversion [14], a submodel is developed for the reverse combustion process in a solid fuel layer on the grate. It gives good predictions for the velocity of the combustion front and the spatial profiles of porosity, oxygen fraction and temperature, which are essential inputs for NOx calculations.
Multiscale approaches proved to be very useful for modeling and simulation of biological and biomedical systems [15-18]. In [15], a multiscale cell-based model is presented that addresses three stages of cancer development: avascular tumor growth, tumor-induced angiogenesis, and vascular tumor growth. The model includes the following three levels that are integrated through a hybrid MPI parallel scheme: the intracellular regulations that are described by Boolean networks, the cellular level growth and dynamics that are described by a lattice Monte Carlo model, and the extracellular dynamics of the signaling molecules and metabolites that are described by a system of reaction-diffusion equations. The work in [17] analyzes the dynamics of semi-flexible polymers, such as DNA molecules. A new efficient approximate technique predicts material properties of the polymeric fluids, accounting for internal viscosity. The results explain the phenomenon of shear thinning and provide better predictions compared to traditional techniques. In [16], coupled autoregulated oscillators in a single- and multi-cellular environment are modeled, taking into consideration intrinsic noise effects in genetic regulation, characterized by delays due to the slow biochemical processes. Diverse disciplines including physiology, biomechanics, fluid mechanics and simulation are brought together in [18] to develop a predictive model of the behavior of a prosthetic heart valve, applying simulation techniques to the study of cardiovascular problems such as blood clotting. A commercial finite volume computational fluid dynamics code, ANSYS/CFX, is used for the 3D components of the model.

Advanced mathematical and numerical algorithms are required for effective coupling of various models across multiple scales and for efficient reduction of the computations needed for fine-scale simulations without loss of accuracy [22-25]. As a significant extension of the classical multiscale finite element methods, paper [24] is devoted to the development of a theoretical framework for multiscale Discontinuous Galerkin (DG) methods and their application to efficient solution of flow and transport problems in porous media, illustrated with numerical examples. In this work, local DG basis functions at the coarse scale are first constructed to capture the local properties of the differential operator at the fine scale, and then the DG formulations using the newly constructed local basis functions, instead of conventional polynomial functions, are solved on the coarse-scale elements. In [23], an efficient characteristic finite element method is proposed for solving the magnetic induction equation in magnetohydrodynamics, with numerical results exhibiting how the topological structure and energy of the magnetic field evolve for different resistivity scales. Paper [22] presents a fast Fourier spectral technique to simulate the Navier-Stokes equations with no-slip boundary conditions, enforced by an immersed boundary technique called volume-penalization. In [25], a deflation technique is proposed to accelerate the iterative solution of the linear system arising from discretization of the pressure Poisson equation in bubbly flow problems.

A number of computational methodologies and programming tools have been developed for simulations of multiscale multiphysics systems [19-21]. In the mesh generation technique presented in [21], surface reconstruction in applications involving complex irregular domains is considered for modeling biological systems, and an efficient and relatively simple approach is proposed to automatically recover a high-quality surface mesh from low-quality, non-consistent inputs that are often obtained via 3-D acquisition systems such as magnetic resonance imaging, microscopy or laser scanning.
In [20], a new methodology for the two-way coupling of microscopic and macroscopic models, called Macro-Micro Interlocked simulation, is presented for multiscale simulations, together with a demonstration of its applicability to various phenomena, such as cloud formation in the atmosphere, gas detonation, aurora, solid friction, and the onset of solar flares. Paper [19] addresses the challenges arising from the data exchanges among components of multiscale models and the language interoperability between their various constituent codes. This work leads to the creation of a set of interlanguage bindings for a successful parallel coupling library, the Model Coupling Toolkit.

The cellular automaton, a mathematical model of a finite state machine, has been studied as a paradigm for modeling multiscale complex systems [11,26,27]. The systems considered in [26] arise from the modeling of weed dispersal. In this work, the systems are approximated by pathways through a network of cells, and the results of simulations provide evidence that the method is suitable for modeling weed propagation mechanisms using multiple scales of observation. In [27], complex automata are formalized, with identification of five classes of scale separation and further investigation of the scale separation map in relation to its capability to specify its components. Work in [11] applies macroscopic modeling with cellular automata to the simulation of lava flows, which consist of unconfined multiphase streams whose characteristics vary in space and time as a consequence of many interrelated physical and chemical phenomena.

The lattice Boltzmann method, being a discrete computational method based upon the Boltzmann equation, is a powerful mesoscopic technique for modeling a wide variety of complex fluid flow problems. In addition to its capability to accommodate a variety of boundary conditions, this approach is able to bridge microscopic phenomena with the continuum macroscopic equations [28-31]. In [28], the problem of mixed convection in a driven cavity packed with a porous medium is studied. A lattice Boltzmann model for incompressible flow in porous media and a thermal lattice Boltzmann model for solving the energy equation are proposed, based on the generalized volume-averaged flow model. Project [31] presents a model for molecular transport effects on double diffusive convection; in particular, this model is intended to assess the impact of variable molecular transport effects on the heat and mass transfer in a horizontal shallow cavity due to natural convection of a binary fluid. In [29], a multiscale approach is applied to model polymer dynamics in the presence of a fluid solvent, combining Langevin molecular dynamics techniques with a mesoscopic lattice Boltzmann method for the solvent dynamics. This work is applied in the interesting context of DNA translocation through a nanopore. In [30], the lattice Boltzmann method for the convection-diffusion equation with a source term is applied directly to solve several important nonlinear complex equations by using a complex-valued distribution function and relaxation time.

3 Conclusions

The progress in understanding physical, chemical, biological, sociological and even economic processes strongly depends on the adequacy and accuracy of numerical simulation. All the systems important for scientific and industrial applications are essentially multiphysics and multiscale: they are characterized by the interaction of a great number of intertwined processes that operate at different spatial and temporal scales. Modern simulation technologies strive to bridge the gaps between different levels of description and to seamlessly combine scales spanning many orders of magnitude in one simulation. The progress in developing multiphysics multiscale models and the specific numerical methodologies is exemplified by the projects presented in the SMMS workshops [4].

Acknowledgments. We would like to thank the participants of our workshop for their inspiring contributions, and the members of the Program Committee for their diligent work, which led to the very high quality of the conference. Special thanks go to Alfons Hoekstra for his efficient and energetic work on preparing the SMMS'2007 workshop. The organization of this event was partly supported by the NWO/RFBR projects # 047.016.007 and 047.016.018, and the Virtual Laboratory for e-Science Bsik project.

References*
1. P.M.A. Sloot et al. White paper on Computational e-Science: Studying complex systems in silico. A National Research Initiative. December 2006. http://www.science.uva.nl/research/pscs/papers/archive/Sloot2006d.pdf
2. SCaLeS: A Science Case for Large Scale Simulation: http://www.pnl.gov/scales/
3. N. Ayache et al. Towards Virtual Physiological Human: Multilevel modelling and simulation of the human anatomy and physiology. White paper. 2005. http://ec.europa.eu/information_society/activities/health/docs/events/barcelona2005/ec-vph-white-paper2005nov.pdf
4. http://www.science.uva.nl/~valeria/SMMS
5. LNCS V. 3992/2006, DOI 10.1007/11758525, pp. 1-138; LNCS V. 3516/2005, DOI 10.1007/b136575, pp. 1-146; LNCS V. 3039/2004, DOI 10.1007/b98005, pp. 540-678
6. V.V. Krzhizhanovskaya, B. Chopard, Y.E. Gorbachev (Eds.) Simulation of Multiphysics Multiscale Systems. Special Issue of International Journal for Multiscale Computational Engineering. V. 4, Issue 2, 2006. DOI: 10.1615/IntJMultCompEng.v4.i2
7. V.V. Krzhizhanovskaya, B. Chopard, Y.E. Gorbachev (Eds.) Simulation of Multiphysics Multiscale Systems. Special Issue of International Journal for Multiscale Computational Engineering. V. 4, Issue 3, 2006. DOI: 10.1615/IntJMultCompEng.v4.i3
8. G. Abbate, B.J. Thijsse, and C.R.K. Kleijn. Coupled Navier-Stokes/DSMC for transient and steady-state flows. Proceedings ICCS'2007.
9. V.I. Kolobov, R.R. Arslanbekov, and A.V. Vasenkov. Coupling atomistic and continuum models for multi-scale simulations. Proceedings ICCS'2007.
10. V.V. Aristov, A.A. Frolova, S.A. Zabelok, V.I. Kolobov, R.R. Arslanbekov. Simulations of multiscale gas flows on the basis of the Unified Flow Solver. Proceedings ICCS'2007.
11. M.V. Avolio, D. D'Ambrosio, S. Di Gregorio, W. Spataro, and R. Rongo. Modelling macroscopic phenomena with cellular automata and parallel genetic algorithms: an application to lava flows. Proceedings ICCS'2007.
12. S. Gallego, P. Degond, and F. Méhats. On a new isothermal quantum Euler model: Derivation, asymptotic analysis and simulation. Proceedings ICCS'2007.

* All papers in the Proceedings of ICCS'2007 are presented in the same LNCS volume, following this paper.


13. X. Zhou and Y. Jiang. A general long-time molecular dynamics scheme in atomistic systems. Proceedings ICCS'2007.
14. R.J.M. Bastiaans, J.A. van Oijen, and L.P.H. de Goey. A model for the conversion of a porous fuel bed of biomass. Proceedings ICCS'2007.
15. Y. Jiang. Multiscale, cell-based model for cancer development. Proceedings ICCS'2007.
16. A. Leier and P. Burrage. Stochastic modelling and simulation of coupled autoregulated oscillators in a multicellular environment: the her1/her7 genes. Proceedings ICCS'2007.
17. J. Yang and R.V.N. Melnik. A new model for the analysis of semi-flexible polymers with internal viscosity and applications. Proceedings ICCS'2007.
18. V. Diaz Zuccarini. Multi-physics and multiscale modelling in cardiovascular physiology: Advanced users methods of biological systems with CFX. Proceedings ICCS'2007.
19. E.T. Ong et al. Multilingual interfaces for parallel coupling in multiphysics and multiscale systems. Proceedings ICCS'2007.
20. K. Kusano, A. Kawano, and H. Hasegawa. Macro-micro interlocked simulations for multiscale phenomena. Proceedings ICCS'2007.
21. D. Szczerba, R. McGregor, and G. Szekely. High quality surface mesh generation for multi-physics bio-medical simulations. Proceedings ICCS'2007.
22. G.H. Keetels, H.J.H. Clercx, and G.J.F. van Heijst. A Fourier spectral solver for wall bounded 2D flow using volume-penalization. Proceedings ICCS'2007.
23. J. Liu. An efficient characteristic method for the magnetic induction equation with various resistivity scales. Proceedings ICCS'2007.
24. S. Sun. Multiscale discontinuous Galerkin methods for modeling flow and transport in porous media. Proceedings ICCS'2007.
25. J.M. Tang and C. Vuik. Acceleration of preconditioned Krylov solvers for bubbly flow problems. Proceedings ICCS'2007.
26. A.G. Dunn and J.D. Majer. Simulating weed propagation via hierarchical, patch-based cellular automata. Proceedings ICCS'2007.
27. A.G. Hoekstra. Towards a complex automata framework for multi-scale modelling formalism and the scale separation map. Proceedings ICCS'2007.
28. Z. Chai, B. Shi, and Z. Guo. Lattice Boltzmann simulation of mixed convection in a driven cavity packed with porous medium. Proceedings ICCS'2007.
29. S. Melchionna, E. Kaxiras, and S. Succi. Multiscale modeling of biopolymer translocation through a nanopore. Proceedings ICCS'2007.
30. B.C. Shi. Lattice Boltzmann simulation of some nonlinear complex equations. Proceedings ICCS'2007.
31. X.M. Yu, B.C. Shi, and Z.L. Guo. Numerical study of molecular transport effects on double diffusive convection with lattice Boltzmann method. Proceedings ICCS'2007.

Simulating Weed Propagation Via Hierarchical, Patch-Based Cellular Automata

Adam G. Dunn and Jonathan D. Majer

Alcoa Research Centre for Stronger Communities and Department of Environmental Biology, Curtin University of Technology, Western Australia
{A.Dunn, J.Majer}@curtin.edu.au

Abstract. Ecological systems are complex systems that feature heterogeneity at a number of spatial scales. Modelling weed propagation is difficult because local interactions are unpredictable, yet responsible for global patterns. A patch-based and hierarchical cellular automaton using probabilistic connections suits the nature of environmental weed dispersal mechanisms. In the presented model, weed dispersal mechanisms, including human disturbance and dispersal by fauna, are approximated by pathways through a network of cells. The results of simulations provide evidence that the method is suitable for modelling weed dispersal mechanisms using multiple scales of observation.

Keywords: Environmental weeds, hierarchical patch dynamics, cellular automata, multiscale heterogeneity.

1 Introduction and Context

Weed establishment and spread is a significant issue in Western Australia (WA), especially in the region that is known as an international biodiversity hotspot [1]. Biodiversity describes the state of an ecosystem in terms of the natural complexity through which it evolves over time. Environmental weeds are plants that degrade an ecosystem via simplification; these weeds compete more effectively than their native counterparts in an environment that is foreign to them [2]. Along the south coast of WA, local and state governments, the Gondwana Link initiative [3] and other community groups benefit from a predictive analysis of the landscape-scale effects of their decisions on the propagation of weeds. An ecological system is a complex system [4], and the intricate web of interactions between species of flora and fauna forms a resilient system. A system's degradation by weeds makes it more susceptible to further degradation. Weeds propagate by dispersal and are constrained by competition with other plants and by predation [5]. Significant dispersal vectors include seed-eating birds and grazing animals, human transport networks, watercourses, wind and agricultural disturbances. Each of these dispersal vectors acts in a different manner and often at different spatial and temporal scales.


The approach to modelling the landscape described here is novel because it extends the cellular automata formalism to create a cellular space that is hierarchical and irregular. As a result, the structure captures the patchy nature of the landscape realistically, as well as being capable of representing GIS data with a variety of levels of granularity in a single structure.

2 Background

Ecosystems are complex systems in which the spatial dynamics may be described by a hierarchy of patches. Existing models of spatial ecosystem dynamics vary considerably in structure; two-dimensional raster grids that discretise mechanisms of competition or propagation are typical. Hierarchical patch dynamics is a modelling hypothesis for ecosystems that matches the physical spatial structure of ecosystems and other landscape patterns, such as land use and urbanisation. Ecological systems are heterogeneous, complex systems that are neither completely regular nor completely random [6]. They exhibit a natural patchiness [7] that is defined by the processes of dispersal and competition, and are influenced by soil structure and natural resources, fire and forestry regimes, other human disturbances and climate. Distinct and abrupt changes between vegetation types are typical (from woodland to grassland, for example) and a mosaic of vegetation types over a landscape is a natural feature of an ecological system, besides being a feature of human-modified landscapes. Ecological systems display multiscale heterogeneity: when viewing a system at a single level of spatial granularity, the landscape may be segmented into a set of internally homogeneous patches. To be internally homogeneous, a patch need only have an aggregable set of properties that are considered to be consistent throughout the patch at that level. For example, a coarse-granularity view of a landscape may be segmented into regions of forest, woodland and grassland, whereas an individual patch of forest may be segmented by the distribution of species or even by individual plants. In the case of weed propagation, the data quality issue is one of uncertainty at the microscale. In broad terms, weed propagation is the combination of the processes of germination, competition for resources and seed dispersal, as well as uncertain long distance dispersal (LDD) events [8]. The typical approach for modelling the dispersal of any plant is to use empirical information to build a function that aggregates the several modes of seed dispersal by likelihood for a homogeneous landscape. The 'observational reality' of weed dispersal phenomena is one of uncertainty and multiple scales. At the scale of individual plants, the effects of individual birds, other animals, water flow, or human disturbances are essentially 'random events' in the colloquial sense of the phrase. At a slightly coarser scale, the dispersal mechanisms for weeds become more stable in the sense that a model recovers some notional sense of predictability; mechanisms are described using a radius of spread or a likely direction of spread, and this is the operational scale of weed propagation [9].


Given the need to capture multiscale heterogeneity and the patchiness of ecological systems, the hierarchical patch dynamics formalism [10,11] may be used to model weed propagation. Hierarchical patch dynamics is a modelling method that captures the patchiness and multiscale heterogeneity of ecological systems as a hierarchy of interacting, heterogeneous (and internally homogeneous) patches. The hierarchical patch dynamics method may be implemented as a cellular automaton [12], modified to be irregular, probabilistic and hierarchical. Coarse patches comprise several finer patches in a hierarchy. Regular-grid cellular automata introduce a bias in propagation [13] in the absence of any method to avoid it, such as stochastic mechanisms in timing or update. Introducing stochasticity implicitly, via irregularity or asynchronicity, or explicitly, via a probabilistic update, can restore isotropy in a cellular automaton model of propagation [14]. The model described in the following section includes stochasticity, following the hierarchical patch dynamics concept, to match the spatial nature of an ecological landscape.

3 Method

The model is constructed as a network of both interlevel connections forming a hierarchy and intralevel connections forming a graph. The structure of the model captures multiple scales of observation from GIS data, relating the levels to each other via the process of abstraction. Simulations are run by traversing the structure to determine the updated state of the network, which involves discovering the likelihood of dispersal through the heterogeneous and multiscale landscapes. The approach is analogous to existing models that use dispersal curves [15], but the present model additionally captures the dispersal modes explicitly, rather than aggregating them into a single function, and it effectively manages the multiscale heterogeneity.

A cell is defined by its state, a set of connections to its neighbourhood and its Cartesian coordinates i, j ∈ R. The state of a cell s ∈ S is defined by the static physical properties taken from the GIS data associated with its level of abstraction; S represents the union of the set of possible states from each level (for example, |S| = 13 in Fig. 2). The extent of a cell is defined by the Voronoi decomposition [16] of the landscape at the level to which the cell belongs and is therefore dependent on the locations of the nodes in the cell's neighbourhood. The intralevel neighbourhood of a cell is defined by the Delaunay triangulation of the subset of nodes in the graph, and the interlevel neighbourhood is defined by the process of abstraction in which the hierarchy is created from physical state information. An abstraction necessarily involves three homogeneous cells belonging to a single triangle in the Delaunay triangulation. Using this definition, the cells composed to create a new cell are guaranteed to be neighbours; they will represent an internally homogeneous environment (for the given level of abstraction) and will create a new level of abstraction in which fewer cells are used to represent the landscape and the average size of a cell increases. A single abstraction is illustrated in Fig. 1. The extents of the cells are given by the Voronoi decomposition and the connectivity is given by the Delaunay triangulation. In the figure, the cell d is now related to cells a, b and c with a 'composed of' relationship and three interlevel connections are formed, contributing to the structure of the hierarchy. Note that the choice of ternary trees over other forms is the simplest choice given the triangulation method; other forms produce the same shapes using a centroid method for defining the new cell's node location.
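To make the neighbourhood construction concrete, the sketch below builds the intralevel connectivity and the Voronoi extents for one level from a set of node coordinates. It is an illustrative reconstruction rather than the authors' implementation; the use of SciPy's Delaunay and Voronoi routines, the random node placement and all names are assumptions introduced here.

```python
# Illustrative sketch (not the paper's code): intralevel neighbourhoods from a
# Delaunay triangulation; the Voronoi regions of the same nodes give cell extents.
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
nodes = rng.random((200, 2))           # (i, j) node coordinates of one level

tri = Delaunay(nodes)                  # intralevel connectivity
vor = Voronoi(nodes)                   # cell extents (Voronoi decomposition);
                                       # vor.point_region/vor.regions give each cell's polygon

# Each triangle edge links two neighbouring cells.
neighbours = {k: set() for k in range(len(nodes))}
for a, b, c in tri.simplices:
    for u, v in ((a, b), (b, c), (a, c)):
        neighbours[u].add(v)
        neighbours[v].add(u)

print(len(neighbours[0]), "neighbours of cell 0")
```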


Fig. 1. A single abstraction is shown on a detail view of one landscape. The top two illustrations are the Voronoi decomposition of a set of nodes. The bottom two illustrations are a subset of the Delaunay triangulation for a set of nodes. The cells a, b and c in the left illustrations are abstracted to form cell d as given on both of the illustrations on the right. Cell d has a larger size than cells a, b and c (taken individually) and the distance to the nearest cells increases.

The result of multiple abstractions creates a hierarchy that builds a set of complete ternary trees. A set of points is initially distributed as a set of Halton points and each abstraction is made for a group of three internally homogeneous cells using the GIS data as a guide. The abstraction process continues until there are no more groups of homogeneous cells that may be abstracted such that the newly created distances between nodes are above a critical distance (representative of the operational scale). The above abstraction process is repeated for the several levels of GIS data until the coarsest GIS data is captured within the hierarchical structure. In Fig. 2 an example landscape is presented for three levels of fictitious GIS data captured as raster information (see Fig. 3 for two intermediate levels of abstraction within the hierarchical structure that is built for this data).
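A minimal sketch of one abstraction pass is given below. It is not the authors' code: the homogeneity test (equal cell states), the placement of the parent node at the centroid, and the use of a critical spacing d_crit as the stopping criterion are simplifying assumptions made for illustration.

```python
# Hedged sketch of one abstraction pass: groups of three mutually adjacent,
# internally homogeneous cells are merged into a parent cell at their centroid.
import numpy as np
from scipy.spatial import Delaunay

def abstract_level(nodes, states, d_crit):
    """Return parent nodes, parent states and interlevel links for one pass."""
    tri = Delaunay(nodes)
    used = np.zeros(len(nodes), dtype=bool)
    parent_nodes, parent_states, links = [], [], []
    for simplex in tri.simplices:
        a, b, c = simplex
        if used[[a, b, c]].any():
            continue                              # each child joins one parent only
        if not (states[a] == states[b] == states[c]):
            continue                              # abstraction needs homogeneity
        centroid = nodes[simplex].mean(axis=0)
        # keep the coarser level near the operational scale (assumed criterion)
        if np.linalg.norm(nodes[simplex] - centroid, axis=1).max() > d_crit:
            continue
        used[[a, b, c]] = True
        parent_nodes.append(centroid)
        parent_states.append(states[a])
        links.append((a, b, c))                   # 'composed of' relationships
    return np.array(parent_nodes), parent_states, links

# toy usage with random points and three fictitious GIS classes
nodes0 = np.random.default_rng(2).random((60, 2))
states0 = list(np.random.default_rng(3).integers(0, 3, 60))
parents, pstates, plinks = abstract_level(nodes0, states0, d_crit=0.2)
```

Repeating such passes over the surviving cells, once per level of GIS data, would build the ternary-tree hierarchy described above.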


Fig. 2. Fictitious satellite imagery, vegetation survey data and land tenure data of the same landscape. Three levels of an example landscape are illustrated with the following categorisations: a pasture and crops, b natural vegetation, c bare or disturbed, d impermeable or built up, e type 1 vegetation, f crop or pasture, g road, h type 2 vegetation, i urban, j peri-urban, k nature reserve, l private agricultural, m private residential.

The structure of the landscape model provides a useful multiscale description of the environment for multiple scales of observation (based on GIS data), but it does not, alone, provide a method for simulating weed propagation for the variety of dispersal modes, each with their own operational scale. The simulation method chosen for the implementation described here is to associate a probability with each of the connections in the network, the interlevel hierarchy and intralevel graphs combined. At each time step (a typical synchronous timing mechanism is adopted) and for each cell, a breadth-first search through the network is used to discover the avenues of possible dispersal and their likelihood values. This traversal mimics seed dispersal curves and offers a unique approach to modelling LDD via its inherent uncertainty. Probabilities for each of the connections are determined by the state information of the connected cells and the distance between their nodes. For example, given the higher density of seed rain (dispersal by seed-eating birds perching on trees) near the periphery of remnant vegetation [17], the likelihood of dispersal between remnant vegetation and most other types of landscape is given a higher value than that between two remnant vegetation cells at the vegetation survey scale. The operational scale here has a finer granularity than human dispersal modes, which occur over both longer distances and with higher uncertainty. This is analogous to dispersal curve models of short and long distance dispersal.

There are three significant assumptions made about the landscape and the process of dispersal for the implementation described here. Firstly, it is assumed that several processes, including dispersal, seed predation, germination, seasonal variance and seed bank dynamics, may be aggregated into a single dispersal mechanism. Secondly, it is assumed that the operational scales of the variety of dispersal modes lie within a reasonable range of orders of magnitude. Computer simulation would be too computation-intensive if the operational scales included millimetres and hundreds of kilometres (and their associated time scales). Lastly, it is assumed for these experiments that the landscape is static except for the weed propagation; ceteris paribus has been invoked in a heavy fashion for this experiment.
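The breadth-first traversal described above can be sketched as follows. This is an illustrative reconstruction: the pruning cut-off, the "keep the most likely pathway" rule and all names are assumptions rather than details taken from the paper.

```python
# Minimal sketch of dispersal-likelihood discovery by breadth-first traversal
# of the combined inter/intralevel network; path likelihoods are products of
# connection probabilities and the search is pruned below a cut-off.
from collections import deque

def dispersal_likelihoods(graph, source, cutoff=1e-3):
    """graph[u] -> iterable of (v, p_uv); returns {cell: likelihood}."""
    best = {source: 1.0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, p_uv in graph.get(u, ()):
            like = best[u] * p_uv
            if like < cutoff or like <= best.get(v, 0.0):
                continue
            best[v] = like                # keep the most likely pathway found
            queue.append(v)
    return best

# toy network: 0 -> 1 -> 2, plus a weaker direct connection 0 -> 2
toy = {0: [(1, 0.5), (2, 0.1)], 1: [(2, 0.6)]}
print(dispersal_likelihoods(toy, source=0))   # {0: 1.0, 1: 0.5, 2: 0.3}
```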

4 Results and Discussion

In testing this implementation of hierarchical patch dynamics, the results of simulations demonstrate that the method is sensitive to the choice of granularity, but that the methodology is capable of representing the multiple dispersal mechanisms and the dynamics of a weed population. Importantly, the simulations demonstrate that there is advantage in the rigorous mapping between the operational scales of the dispersal phenomena and the observational scales captured by the GIS data. The approach is useful specifically because it captures dispersal modes and multiple scales explicitly, combining them in a framework that provides the aforementioned rigorous mapping. The results of one simulation are presented in Fig. 3; the simulation uses the fictitious GIS data depicted in Fig. 2. In this example, an environmental weed is given properties that are typical of a fruit-bearing weed whose dispersal is dependent on birds, small mammals and human disturbance routines. A weed of this type is likely to spread along the periphery of remnant vegetation, because of the desirability of the perching trees for birds in these regions, and to be found more commonly in and near regions with higher populations and greater frequencies of disturbance. The probabilities associated with remnant vegetation ↔ crop/pasture connections are relatively high, those for crop/pasture ↔ crop/pasture are zero, and any connections within or between frequently disturbed patches are associated with relatively high probabilities. Since the weed and the landscape are both fictitious, the results of the experiment are not easily verified; instead, this experiment represents a proof of concept for the modelling formalism and a typical set of operational scales that a model may be expected to represent.

Experiments in both homogeneous and heterogeneous landscapes suggest two specific issues with this implementation of aggregation/abstraction. The method does not exhibit the same bias caused by a regular lattice [13], but there is an apparent degradation in the approximation to isotropy in homogeneous experiments where the highest density of points is too coarse. In simpler terms, a minimum density of points is required to maintain isotropic propagation. An issue apparent in the heterogeneous experiments is that thin corridors through which weeds may spread are sensitive to maximum densities; a model structure must represent the thinnest corridor through which a weed may propagate. It is therefore important to accurately capture the grain and extent of the operational scale; for example, some fauna involved in the dispersal may ignore thin corridors, whilst other fauna may actively use them. By combining different forms of data (such as vegetation surveys and land tenure information), the model captures the effects of a range of dispersal mechanisms in the same simulation, predicting the combined effects of human and natural dispersal mechanisms.

Fig. 3. The extent of cells depicted for three intermediate time steps and two levels of abstraction in the hierarchy. Elapsed time increases from left to right; the three sub-figures above are for the satellite imagery level of abstraction and the bottom three sub-figures are for the land tenure data level of abstraction. The increased rate of spread along the periphery is apparent in the top sub-figures and the dispersal over a disconnection is apparent in the bottom sub-figures.

A final assessment suggests that a more rigorous method for mapping from the dispersal mechanisms to the likelihood values is required. Specifically, the behaviour of the system is sensitive to the choice of both the aggregation values between levels of abstraction and the intralevel connection probabilities. By creating a spatial model of the ecosystem in the south-west of WA using this implementation of hierarchical patch dynamics, predictive analysis of risk may be undertaken using spatial information about government policy and community group choices. Hierarchical patch dynamics is used as a model of multiscale ecological heterogeneity, and the simulation presented here demonstrates a rigorous implementation of hierarchical patch dynamics. The fundamental approach of combining a variety of levels of abstraction (for human observation as well as organism perception) is the key advantage of this approach over existing approaches. The results of simulations suggest that the methodology is useful in creating a physically realistic (and therefore useful) model of the uncertain phenomena of weed propagation, but they also suggest the need for introducing stronger links to a priori information about individual dispersal mechanisms in practical solutions to the issue of environmental weeds, rather than adopting a purely empirical approach.

Acknowledgments. The authors acknowledge the Alcoa Foundation's Sustainability & Conservation Fellowship Program for funding the postdoctoral fellowship of the senior author (http://strongercommunities.curtin.edu.au) and two anonymous reviewers for helpful comments.

References
1. Myers, N., Mittermeier, R.A., Mittermeier, C.G., Fonseca, G.A.B.d., Kent, J.: Biodiversity hotspots for conservation priorities. Nature 403 (2000) 853–858
2. Ellis, A., Sutton, D., Knight, J., eds.: State of the Environment Report Western Australia draft 2006. Environmental Protection Authority (2006)
3. Anon.: Gondwana Link Website (2006) [online] http://www.gondwanalink.org, last accessed 05/10/2006.
4. Bradbury, R.H., Green, D.G., Snoad, N.: Are ecosystems complex systems? In Bossomaier, T.R.G., Green, D.G., eds.: Complex Systems. Cambridge University Press, Cambridge (2000) 339–365
5. van Groenendael, J.M.: Patchy distribution of weeds and some implications for modelling population dynamics: a short literature review. Weed Research 28 (1988) 437–441
6. Green, D., Klomp, N., Rimmington, G., Sadedin, S.: Complexity in Landscape Ecology, Landscape Series. Volume 4. Springer (2006)
7. Greig-Smith, P.: Pattern in vegetation. Journal of Ecology 67 (1979) 755–779
8. Nathan, R., Perry, G., Cronin, J.T., Strand, A.E., Cain, M.L.: Methods for estimating long-distance dispersal. Oikos 103 (2003) 261–273
9. Wu, J.: Effects of changing scale on landscape pattern analysis: scaling relations. Landscape Ecology 19 (2004) 125–138
10. Wu, J.: From balance-of-nature to hierarchical patch dynamics: a paradigm shift in ecology. Q. Rev. Biol. 70 (1995) 439–466
11. Wu, J., David, J.L.: A spatially explicit hierarchical approach to modeling complex ecological systems: theory and applications. Ecological Modelling 153 (2002) 7–26
12. Chopard, B., Droz, M.: Cellular Automata Modeling of Physical Systems. Monographs and Texts in Statistical Physics. Cambridge University Press (1998)
13. O'Regan, W., Kourtz, P., Nozaki, S.: Bias in the contagion analog to fire spread. Forest Science 22 (1976) 61–68
14. Schönfisch, B.: Anisotropy in cellular automata. BioSystems 41 (1997) 29–41
15. Higgins, S.I., Richardson, D.M.: Predicting plant migration rates in a changing world: The role of long-distance dispersal. The American Naturalist 153(5) (1999) 464–475
16. Okabe, A., Boots, B., Sugihara, K.: Spatial Tessellations: Concepts and Applications of Voronoi Diagrams. John Wiley & Sons (1992)
17. Buckley, Y.M., Anderson, S., Catterall, C.P., Corlett, R.T., Engel, T., Gosper, C.R., Nathan, R., Richardson, D.M., Setter, M., Spiegel, O., Vivian-Smith, G., Voigt, F.A., Weir, J.E.S., Westcott, D.A.: Management of plant invasions mediated by frugivore interactions. Journal of Applied Ecology 43 (2006) 848–857

A Multiscale, Cell-Based Framework for Modeling Cancer Development

Yi Jiang

Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
[email protected]

Abstract. We use a systems approach to develop a predictive model that medical researchers can use to study and treat cancerous tumors. Our multiscale, cell-based model includes intracellular regulations, cellular level dynamics and intercellular interactions, and extracellular chemical dynamics. The intracellular level protein regulations and signaling pathways are described by Boolean networks. The cellular level growth and division dynamics, cellular adhesion, and interaction with the extracellular matrix are described by a lattice Monte Carlo model. The extracellular dynamics of the chemicals follow a system of reaction-diffusion equations. All three levels of the model are integrated into a high-performance simulation tool. Our simulation results reproduce experimental data in both avascular tumors and tumor angiogenesis. This model will enable medical researchers to gain a deeper understanding of the cellular and molecular interactions associated with cancer progression and treatment.

1 Introduction

Since 2002, cancer has become the leading cause of death for Americans between the ages of 40 and 74 [1]. But the overall effectiveness of cancer therapeutic treatments is only 50%. Understanding tumor biology and developing a prognostic tool could therefore have immediate impact on the lives of millions of people diagnosed with cancer. There is growing recognition that achieving an integrative understanding of molecules, cells, tissues and organs is the next major frontier of biomedical science. Because of the inherent complexity of real biological systems, the development and analysis of computational models based directly on experimental data is necessary to achieve this understanding. Our model aims to capture knowledge through the explicit representation of dynamic biochemical and biophysical processes of tumor development in a multiscale framework.

Tumor development is very complex and dynamic. Primary malignant tumors arise from small nodes of cells that have lost, or ceased to respond to, normal growth regulatory mechanisms, through mutations and/or altered gene expression [2]. This genetic instability causes continued malignant alterations, resulting in a biologically complex tumor. However, all tumors start from a relatively simpler, avascular stage of growth, with nutrient supply by diffusion from the surrounding tissue. The restricted supply of critical nutrients, such as oxygen and glucose, results in marked gradients within the cell mass. The tumor cells respond both through induced alterations in physiology and metabolism, and through altered gene and protein expression [3], leading to the secretion of a wide variety of angiogenic factors. Angiogenesis, the formation of new blood vessels from existing blood vessels, is necessary for subsequent tumor expansion. Angiogenic growth factors generated by tumor cells diffuse into the nearby tissue and bind to specific receptors on the endothelial cells of nearby pre-existing blood vessels. The endothelial cells become activated; they proliferate and migrate towards the tumor, generating blood vessel tubes that connect to form blood vessel loops that can circulate blood. With the new supply system, the tumor will renew growth at a much faster rate. Cells can invade the surrounding tissue and use their new blood supply as highways to travel to other parts of the body. Members of the vascular endothelial growth factor (VEGF) family are known to have a predominant role in angiogenesis.

The desire to understand tumor biology has given rise to mathematical models that describe tumor development. But no mathematical model of tumor growth can yet start from a single cell and follow the whole process of tumor development. The state of the art in this effort employs hybrid approaches: Cristini et al. simulated the transition from avascular tumor growth to angiogenesis to vascular and invasive growth using an adaptive finite-element method coupled to a level-set method [4]; and Alarcon et al. used a hybrid of cellular automata and continuous equations for vascular tumor growth [5,6]. We have developed a multiscale, cell-based model of tumor growth and angiogenesis [7,8]. This paper aims to review and promote this model framework. The Model section describes our model at three levels. The Parallelization section describes the hybrid scheme that makes the model a high-performance simulation tool. The Results section shows that the model quantitatively reproduces experimental measurements in tumor spheroids, and qualitatively reproduces experimental observations in tumor angiogenesis. We conclude by commenting on the broad applicability of this cell-based, multiscale modeling framework.

2 Model

Our model consists of three levels. At the intracellular level, a simple protein regulatory network controls the cell cycle. At the cellular level, a Monte Carlo model describes cell growth, death, cell-cycle arrest, and cell-cell adhesion. At the extracellular level, a set of reaction-diffusion equations describes the chemical dynamics. The three levels are closely integrated. The details of the avascular tumor model have been described in [7].

The passage of a cell through its cell cycle is controlled by a series of proteins. Since experiments suggest that more than 85% of the quiescent cells are arrested in the G1 phase [11], in our model the cells in their G1 phase have the highest probability of becoming quiescent. We model this cell-cycle control through a simplified protein regulatory network [12], which controls the transition between the G1 and S phases [7]. We model these proteins as on or off. By default this Boolean network allows the cell to transition to the S phase. However, concentrations of growth and inhibitory factors directly influence the protein expression. If the outcome of this Boolean regulatory network is zero, the cell undergoes cell-cycle arrest, or turns quiescent. When a cell turns quiescent, it reduces its metabolism and stops its growth.

The cellular model is based on the Cellular Potts Model (CPM) [9,10]. The CPM adopts a phenomenological simplification of cell representation and cell interactions. CPM cells are spatially extended with complex shapes and without internal structure, represented as domains on the lattice with specific cell ID numbers. Most cell behaviors and interactions are expressed in terms of effective energies and elastic constraints. Cells evolve continuously to minimize the effective energy; the CPM employs a modified Metropolis Monte Carlo algorithm, which chooses update sites randomly and accepts them with a Boltzmann probability. The total energy of the tumor cell system includes an interfacial energy between the cells that describes the cell-type dependent adhesion energy, a volume constraint that keeps the cell volume close to a value that it "intends" to maintain, and an effective chemical energy that describes the cell's ability to migrate up or down the gradient of the chemical concentration. When the cell's clock reaches the cell cycle duration and the cell volume reaches the target volume, the cell divides. The daughter cells inherit all properties of their parent, with a probability of mutation.

Cells also interact with their environment, which is characterized by local concentrations of biochemicals. We consider two types of chemicals: metabolites and chemoattractants. The former include nutrients (oxygen and glucose), metabolic waste (lactate), and growth factors and inhibitors that cells secrete and take up. The latter correspond to the chemotactic signaling molecules. The chemicals follow the reaction-diffusion dynamics:

    ∂M_i/∂t = d_i ∇²M_i − a_i,                (1)
    ∂C_i/∂t = D_i ∇²C_i − γ C_i + B.          (2)

Here the metabolite (concentration M) diffuses with the diffusion constant d and is consumed (or produced) at a constant rate a. The chemoattractant, i.e. VEGF secreted by tumor cells to activate endothelial cells, diffuses with diffusion constant D and decays at a rate γ. The local uptake function B describes that an endothelial cell binds as much VEGF as is available until its surface receptors saturate. Both the metabolic rates and the uptake function B are functions of time and space. By assuming that (1) inside the tumor the diffusion coefficients are constant, and (2) each cell is chemically homogeneous, while different cells might have different chemical concentrations, we can solve the equations on a much coarser lattice than the CPM lattice.

We use parameters derived from multicellular tumor spheroid experiments, the primary in vitro experimental model of avascular tumors [2]. Like tumors in vivo, a typical spheroid consists of a necrotic core surrounded by layers of quiescent and proliferating tumor cells, as shown in Fig. 2(c). It is critical to emphasize that this multicellular tumor spheroid experimental model recapitulates all major characteristics of the growth, composition, microenvironment, and therapeutic response of solid tumors in humans [2].

To model tumor angiogenesis, the simulation domain now corresponds to the stroma between the existing blood vessel and an avascular tumor. The tumor is a constant source of VEGF, whose dynamics follow Eqn. (2) and establish a concentration gradient across the stroma. Each endothelial cell becomes activated when the local VEGF concentration exceeds a threshold value. The activated vascular endothelial cells not only increase proliferation and decrease their apoptosis rate, but also migrate towards the higher concentration of the VEGF signal. The stroma consists of normal tissue cells and extracellular matrix (ECM). We model the ECM explicitly as a matrix of fibers, a special "cell" with a semi-rigid volume. The interstitial fluid that fills the space amongst the normal tissue cells and the fibers is more deformable than the fibers. When compressed by the growing endothelial sprout, the normal cells can undergo apoptosis and become part of the interstitial fluid. The endothelial cells can modify the local fibronectin concentration and re-organize the fiber structure, as well as migrate on the matrix through haptotaxis, following the gradient of adhesion that arises from the gradient of fibronectin density.
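As a concrete illustration of how the chemical dynamics of Eqns. (1)-(2) can be advanced on the coarse chemical lattice, the sketch below takes one explicit Euler step with a five-point Laplacian. It is a simplified stand-in: the parameter values, the zero-flux boundaries and the 2D lattice are assumptions, and the actual code uses an implicit multigrid solver (see the Parallelization section).

```python
# Hedged sketch: one explicit Euler step for Eqns. (1)-(2) on a coarse 2D
# chemical lattice.  Parameter values, boundaries and lattice size are
# illustrative only; the real code uses an implicit multigrid solver.
import numpy as np

def laplacian(f, h):
    """Five-point Laplacian with replicated-edge (zero-flux) boundaries."""
    g = np.pad(f, 1, mode="edge")
    return (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * f) / h**2

def step(M, C, a, B, d=1.0, D=1.0, gamma=0.1, h=1.0, dt=0.1):
    """Advance metabolite M (Eqn. 1) and chemoattractant C (Eqn. 2) by dt."""
    M_new = M + dt * (d * laplacian(M, h) - a)
    C_new = C + dt * (D * laplacian(C, h) - gamma * C + B)
    return np.clip(M_new, 0.0, None), C_new

M = np.full((32, 32), 0.08)       # e.g. oxygen concentration (mM)
C = np.zeros((32, 32))            # e.g. VEGF concentration
a = np.full_like(M, 1e-3)         # local, cell-dependent consumption rates
B = np.zeros_like(C)              # local uptake/secretion term of Eqn. (2)
M, C = step(M, C, a, B)
```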

3 Parallelization

The underlying structure for the cell system is a 3D lattice. As all operations in the CPM are local, domain decomposition is a natural scheme for parallelizing the CPM. We divide the physical domain into subdomains, one for each processor. Each processor then models its subsystem, storing only the data associated with its subdomain and exchanging knowledge-zone information with the processors handling its neighboring subdomains. We adopt a 1D domain decomposition based on two considerations. First, it decreases the communication overhead because the knowledge zones are more contiguous. Second, this simple decomposition allows us to store the cell information in two layers: the lattice layer and the cell information layer. As each cell occupies many lattice sites (over 100), this two-layer information structure is far more efficient than storing all cell information on the lattice. In the Monte Carlo update, to avoid expensive remote memory access, we use OpenMP, a parallel approach suited for shared-memory processors. Before the Monte Carlo update of the lattice, the Master Node gathers subdomain lattice information and cell information from the slave nodes. The Master Node performs the Monte Carlo operations in parallel using OpenMP, and distributes the subdomain data to the corresponding slave nodes for the next operation (Fig. 1) [13]. We solve the reaction-diffusion equations that govern the chemical dynamics on a chemical lattice that is coarser than the cell lattice. This lattice is similarly decomposed and the equations are solved in parallel within each subdomain. To accelerate long-time simulations, we use implicit PDE solving schemes based on BoxMG, an MPI-based multigrid PDE solver [14].
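The 1D decomposition and knowledge-zone exchange can be illustrated with the toy sketch below. It runs in a single process and copies boundary layers directly; in the actual implementation each slab belongs to an MPI rank and the copies become messages, with OpenMP threads working inside each slab. The 2D lattice and all names here are assumptions for illustration.

```python
# Toy sketch of 1D slab decomposition with one-layer knowledge (ghost) zones.
# The direct copies below stand in for the MPI exchange of the real code.
import numpy as np

def decompose(lattice, n_sub):
    """Split along axis 0 into slabs, each padded with one ghost layer."""
    chunks = np.array_split(lattice, n_sub, axis=0)
    return [np.pad(c, ((1, 1), (0, 0)), mode="edge") for c in chunks]

def exchange_ghosts(slabs):
    """Copy boundary rows between neighbouring slabs (the 'MPI' step)."""
    for k in range(len(slabs) - 1):
        slabs[k][-1] = slabs[k + 1][1]   # lower ghost <- neighbour's first real row
        slabs[k + 1][0] = slabs[k][-2]   # neighbour's upper ghost <- last real row
    return slabs

lattice = np.arange(8 * 4).reshape(8, 4)   # stand-in for the (3D) cell lattice
slabs = exchange_ghosts(decompose(lattice, n_sub=2))
```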


Fig. 1. OpenMP based parallelization in Monte Carlo operation

4 Results

In our simulations, a single tumor cell evolves into a layered structure consisting of concentric spheres of proliferating and quiescent cells at the surface and the intermediate layer respectively, and a necrotic core at the center of the spheroid, reproducing the experimental structure (Fig. 2). Fig. 3 shows the comparison between the growth curves of a simulated solid tumor and two sets of spheroid experimental data. With 0.08 mM oxygen and 5.5 mM glucose kept constant in the medium, the number of cells (Fig. 3a) and the tumor volume (Fig. 3b) first grow exponentially in time for about 5–7 days.


Fig. 2. Snapshots of a simulated solid tumor at 10 days (a), and 18 days (b) of growth from a single cell. Blue, cyan, yellow and crimson encode cell medium, proliferating, quiescent, and necrotic cells. (c) A histological crosssection of a spheroid of mouse mammary cell line, EMT6/R0.


The growth then slows down, coinciding with the appearance of quiescent cells. In both the experiments [15,16] and the simulations, spheroid growth saturates after around 28–30 days. We fit both the experimental and the simulation data to a Gompertz function in order to objectively estimate the initial doubling times and the spheroid saturation sizes [3]. The doubling times for cell volume in experiments and simulations differ by a factor of two over almost 5 orders of magnitude; the agreement is very good.
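The Gompertz summary of the growth curves can be reproduced with a standard least-squares fit, as in the hedged sketch below; the particular parameterisation, the synthetic data and the derived quantities are illustrative assumptions, not the exact formulation of [3].

```python
# Hedged sketch of fitting a Gompertz curve to growth data (synthetic here).
# V(t) = v0 * exp((a/b) * (1 - exp(-b t))): early growth rate a, saturation
# size v0 * exp(a/b).  This is one common parameterisation, for illustration.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, a, b):
    return v0 * np.exp((a / b) * (1.0 - np.exp(-b * t)))

t = np.linspace(0.0, 28.0, 15)                               # days
v = gompertz(t, 2e4, 0.9, 0.12)
v *= np.random.default_rng(1).normal(1.0, 0.05, t.size)      # 5% noise

(v0, a, b), _ = curve_fit(gompertz, t, v, p0=(1e4, 1.0, 0.1))
print("initial doubling time ~", np.log(2) / a, "days")
print("saturation size ~", v0 * np.exp(a / b))
```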


Fig. 3. The growth curves of a spheroid with 0.08 mM O2 and 5.5 mM glucose in the medium: (a) the number of cells and (b) the volume of spheroid in time. The solid symbols are experimental data for EMT6/Ro[15,16]; the circles are simulation data. The solid lines are the best fit with a Gompertz function for experimental data.

In order to test the robustness of our model, we kept all the parameters in the model fixed at the values determined to produce the best fit to the growth of spheroids in 0.08 mM oxygen and 5.5 mM glucose. We then varied only the nutrient concentrations in the medium, as was done in the spheroid experiments. Our simulations still showed good agreement between the simulated and experimental growth curves when the external conditions were changed to 0.28 mM O2 and 16.5 mM glucose in the medium [7]. In fitting our model to the experimental data, we predicted a set of conditions for a cell to undergo necrosis, and we predicted the diffusion coefficients for the growth promoters and inhibitors to be on the order of 10⁻⁷ and 10⁻⁶ cm²/hr, respectively. These predictions can be tested experimentally. In tumor angiogenesis, Fig. 4 shows that our model is able to capture realistic cell dynamics and capillary morphologies, such as preferential sprout migration along matrix fibers and cell elongation, and more complex events, such as sprout branching and fusion, or anastomosis [8], that occur during angiogenesis. Our model constitutes the first cell-based model of tumor-induced angiogenesis with realistic cell-cell and cell-matrix interactions. This model can be employed as a simulation tool for investigating mechanisms, and for testing existing and formulating new research hypotheses. For example, we showed that freely diffusing VEGF results in broad and swollen sprouts, while matrix-bound VEGF typically generates thin sprouts [8], supporting recent experimental interpretations.


Fig. 4. Tumor angiogenesis: a typical simulated vessel sprout. The left edge of the simulation domain is the blood vessel wall, while the right edge is the source of VEGF. Endothelial cells (red) grow and migrate in the stroma, which consists of normal cells (blue squares), matrix fibers (yellow) and interstitial fluid (green).

5 Discussion and Outlook

This multiscale approach treats cells as the fundamental unit of cancer development. We will further develop the model and investigate the growth of vessels into and inside the tumor, as well as tumor growth and invasion. With this framework, we will model the development of cancer from its beginning to full metastasis. We will also be able to test the effects of drugs and therapeutic strategies. Combined with the extant data (e.g. in vitro spheroid data and in vivo angiogenesis data), this type of model will help construct anatomically accurate models of a tumor and its vascular system. If successfully implemented, the model can guide experimental design and interpretation. Continuously revised by new information, the final model could potentially enable us to assess tumor susceptibility to multiple therapeutic interventions, improve understanding of tumor biology, better predict and prevent tumor metastasis, and ultimately increase patient survival. Furthermore, most biomedical problems involve systems-level interactions. Genomic, molecular, or single-cell studies alone cannot provide systems-level behaviors. This cell-based, multiscale modeling framework is applicable to a number of problems, e.g. biofilm formation and organogenesis, where cell-cell and cell-environment interactions dictate the collective behavior.

Acknowledgments. This work was supported by the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396.



References
1. Jemal, A.: The Journal of the American Medical Association 294 (2005) 1255–1259.
2. Sutherland, R.M.: Cell and environment interactions in tumor microregions: the multicell spheroid model. Science 240 (1988) 177–184.
3. Marusic, M., Bajzer, Z., Freyer, J.P., and Vuk-Pavlovic, S.: Analysis of growth of multicellular tumour spheroid by mathematical models. Cell Prolif. 27 (1994) 73.
4. Zheng, X., Wise, S.M., and Cristini, V.: Nonlinear simulation of tumor necrosis, neovascularization and tissue invasion via an adaptive finite-element/level-set method. Bull. Math. Biol. 67 (2005) 211–259.
5. Alarcon, T., Byrne, H.M., and Maini, P.K.: A multiple scale model for tumor growth. Multiscale Modeling and Simulation 3 (2004) 440–475.
6. Alarcon, T., Byrne, H.M., and Maini, P.K.: Towards whole-organ modelling of tumour growth. Progress in Biophysics & Molecular Biology 85 (2004) 451–472.
7. Jiang, Y., Pjesivac, J., Cantrell, C., and Freyer, J.P.: A multiscale model for avascular tumor growth. Biophys. J. 89 (2005) 3873–3883.
8. Bauer, A.L., Jackson, T.L., and Jiang, Y.: A cell-based model exhibiting branching and anastomosis during tumor-induced angiogenesis. Biophys. J. 92 (2007) in press.
9. Glazier, J.A. and Graner, F.: Simulation of the differential adhesion driven rearrangement of biological cells. Phys. Rev. E 47 (1993) 2128–2154.
10. Jiang, Y., Levine, H., and Glazier, J.A.: Differential adhesion and chemotaxis in mound formation of Dictyostelium. Biophys. J. 75 (1998) 2615–2625.
11. LaRue, K.E., Kahlil, M., and Freyer, J.P.: Microenvironmental regulation of proliferation in EMT6 multicellular spheroids is mediated through differential expression of cyclin-dependent kinase inhibitors. Cancer Res. 64 (2004) 1621–1631.
12. Data from the Kyoto Encyclopedia of Genes and Genomes (kegg.com).
13. He, K., Dong, S., and Jiang, Y.: Parallel Cellular Potts Model (2007) in preparation.
14. Austin, T., Berndt, M., et al.: Parallel, Scalable, and Robust Multigrid on Structured Grids. Los Alamos Research Report LA-UR-03-9167 (2003).
15. Freyer, J.P. and Sutherland, R.M.: Regulation of growth saturation and development of necrosis in EMT6/Ro multicellular spheroids by the glucose and oxygen supply. Cancer Res. 46 (1986) 3504–3512.
16. Freyer, J.P. and Sutherland, R.M.: Proliferative and clonogenic heterogeneity of cells from EMT6/Ro multicellular spheroids induced by the glucose and oxygen supply. Cancer Res. 46 (1986) 3513–3520.

Stochastic Modelling and Simulation of Coupled Autoregulated Oscillators in a Multicellular Environment: The her1/her7 Genes

André Leier, Kevin Burrage, and Pamela Burrage

Advanced Computational Modelling Centre, University of Queensland, Brisbane, QLD 4072, Australia
{leier,kb,pmb}@acmc.uq.edu.au

Abstract. Delays are an important feature in temporal models of genetic regulation due to slow biochemical processes such as transcription and translation. In this paper we show how to model intrinsic noise effects in a delayed setting. As a particular application we apply these ideas to modelling somite segmentation in zebrafish across a number of cells in which two linked oscillatory genes her1 and her7 are synchronized via Notch signalling between the cells.

Keywords: delay stochastic simulation algorithm, coupled regulatory systems, multicellular environment, multiscale modelling.

1 Introduction

Temporal models of genetic regulatory networks have to take account of time delays that are associated with transcription, translation and nuclear and cytoplasmic translocations in order to allow for more reliable predictions [1]. An important aspect of modelling biochemical reaction systems is intrinsic noise that is due to the uncertainty of knowing when a reaction occurs and which reaction it is.

When modelling intrinsic noise we can identify three modelling regimes. The first regime corresponds to the case where there are small numbers of molecules in the system so that intrinsic noise effects dominate. In this regime the Stochastic Simulation Algorithm (SSA) [2] is the method of choice and it describes the evolution of a discrete nonlinear Markov process representing the number of molecules in the system. The intermediate regime is called the Langevin regime and here the framework for modelling chemical kinetics is that of a system of Itô stochastic differential equations. In this regime the numbers of molecules are such that we can talk about concentrations rather than individual numbers of molecules but the intrinsic noise effects are still significant. The final regime is the deterministic regime where there are large numbers of molecules for each species. This regime is given by the standard chemical kinetic rate equations that are described by ordinary differential equations. In some sense this third regime represents the mean behaviour of the kinetics in the other two regimes. It is vital to model the chemical kinetics of a system in the most appropriate regime otherwise the dynamics may be poorly represented.


In order to take proper account of both intrinsic randomness and time delays, we have developed the delay stochastic simulation algorithm (DSSA) [3]. This algorithm very naturally generalises the SSA in a delayed setting. Transcriptional and translational time delays are known to drive genetic oscillators. There are many types of molecular clocks that regulate biological processes but, apart from circadian clocks [4], these clocks are still relatively poorly characterised. Oscillatory dynamics are also observed for Notch signalling molecules such as Hes1 and Her1/Her7. The hes1 gene and the two linked genes her1 and her7 are known to play key roles as molecular clocks during somite segmentation in mouse and zebrafish, respectively. In zebrafish the genes her1 and her7 are autorepressed by their own gene products and positively regulated by Notch signalling that leads to oscillatory gene expression with a period of about 30 min, generating regular patterns of somites (future segments of the vertebrate) [5]. In both cases the transcriptional and translational delays are responsible for the oscillatory behaviour.

In a recent set of experiments Hirata et al. [6] measured the production of hes1 mRNA and Hes1 protein in mouse. They measured a regular two hour cycle with a phase lag of approximately 15 minutes between the oscillatory profiles of mRNA and protein. The oscillations are not dependent on the stimulus but can be induced by exposure to cells expressing delta. This work led to a number of modelling approaches using the framework of Delay Differential Equations (DDEs) [1,7]. However, in a more recent work Barrio et al. [3] used a discrete delay simulation algorithm that took into account intrinsic noise and transcriptional and translational delays to show that the Hes1 system was robust to intrinsic noise but that the oscillatory dynamics crucially depended on the size of the transcriptional and translational delays.

In a similar setting, Lewis [5] and Giudicelli and Lewis [8] have studied the nature of somite segmentation in zebrafish. In zebrafish it is well known that two linked oscillating genes her1/her7 code for inhibitory gene regulatory proteins that are implicated in the pattern of somites at the tail end of the zebrafish embryo. The genes her1 and her7 code for autoinhibitory transcription factors Her1 and Her7 (see Fig. 1). The direct autoinhibition causes oscillations in mRNA and protein concentrations with a period determined by the transcriptional and translational delays. Horikawa et al. [9] have performed a series of experiments in which they investigate the system-level properties of the segmentation clock in zebrafish. Their main conclusion is that the segmentation clock behaves as a coupled oscillator. The key element is the Notch-dependent intercellular communication which itself is regulated by the internal hairy oscillator and whose coupling of neighbouring cells synchronises the oscillations. In one particular experiment they replaced some coupled cells by cells that were out of phase with the remaining cells, but showed that at a later stage they still became fully synchronised. Clearly the intercellular coupling plays a crucial role in minimising the effects of noise to maintain coherent oscillations.



Fig. 1. Diagram showing the inter- and intracellular Delta-Notch signalling pathway and the autoinhibition of her1 and her7 genes. DeltaC proteins in the neighboring cells activate the Notch signal within the cell.

Both Lewis and Horikawa et al. have used stochastic models to understand the above effects, but these models are very different from our approach. The Lewis model for a single cell and two coupled cells is generalised by Horikawa et al. to a one-dimensional array of cells. In both approaches they essentially couple a delay differential equation with noise associated with the uncertainty of proteins binding to the operator sites on the DNA. In our case we are rigorously applying the effects of intrinsic noise, in a delayed setting, at all stages of the chemical kinetics. We also note that this is the first stage in developing a truly multi-scaled approach to understanding the effects of delays in a multi-celled environment. Such a multi-scaled model will require us to couple together delay models in the discrete, stochastic and deterministic regimes - see, for example, attempts to do this in Burrage et al. [10].

Section 2 gives a brief description of our DSSA implementation along with a mathematical description of the coupled Her1/Her7 Delta-Notch system for a linear chain of cells. Section 3 presents the numerical results and the paper concludes with discussion on the significance of our approach.

2 Methods

The SSA describes the evolution of a discrete stochastic chemical kinetic process in a well stirred mixture. Thus assume that there are m reactions between N chemical species, and let X(t) = (X1(t), . . . , XN(t)) be the vector of chemical species, where Xi(t) is the number of species i at time t. The chemical kinetics is uniquely characterised by the m stoichiometric vectors ν1, . . . , νm and the propensity functions a1(X), . . . , am(X) that represent the unscaled probabilities of the reactions to occur. The underlying idea behind the SSA is that at each time step t a step size θ is determined from an exponential waiting time distribution such that at most one reaction can occur in the time interval (t, t+θ). If the most likely reaction, as determined from the relative sizes of the propensity functions, is reaction j say, then the state vector is updated as X(t + θ) = X(t) + νj.
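To make the selection mechanism concrete, the following Python sketch performs one step of this procedure (Gillespie's direct method). It is an illustrative reconstruction rather than the authors' code, and the two-reaction birth-death system used to drive it is a made-up example.

import numpy as np

def ssa_step(x, t, propensities, stoichiometry, rng):
    # One SSA step: sample the exponential waiting time, then select the reaction
    a = np.array([f(x) for f in propensities])      # propensities a_j(X(t))
    a0 = a.sum()
    if a0 == 0.0:
        return x, np.inf                            # no reaction can fire any more
    u1, u2 = rng.random(2)
    theta = np.log(1.0 / u1) / a0                   # waiting time ~ Exp(a0)
    j = int(np.searchsorted(np.cumsum(a), u2 * a0)) # smallest j with cumulative sum >= u2*a0
    return x + stoichiometry[j], t + theta

# Made-up example: production 0 -> X1 at rate c1, degradation X1 -> 0 at rate c2*X1
rng = np.random.default_rng(1)
c1, c2 = 5.0, 0.1
propensities = [lambda x: c1, lambda x: c2 * x[0]]
stoichiometry = np.array([[1], [-1]])
x, t = np.array([0]), 0.0
while t < 100.0:
    x, t = ssa_step(x, t, propensities, stoichiometry, rng)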


Algorithm 1. DSSA
Data: reactions defined by reactant and product vectors, consuming delayed reactions are marked, stoichiometry, reaction rates, initial state X(0), simulation time T, delays
Result: state dynamics
begin
  while t < T do
    generate U1 and U2 as U(0,1) random variables
    a0(X(t)) = Σ_{j=1}^{m} aj(X(t))
    θ = (1/a0(X(t))) ln(1/U1)
    select j such that Σ_{k=1}^{j−1} ak(X(t)) < U2 a0(X(t)) ≤ Σ_{k=1}^{j} ak(X(t))
    if delayed reactions are scheduled within (t, t + θ] then
      let k be the delayed reaction scheduled next at time t + τ
      if k is a consuming delayed reaction then
        X(t + τ) = X(t) + νk^p   (update products only)
      else
        X(t + τ) = X(t) + νk
      t = t + τ
    else
      if j is not a delayed reaction then
        X(t + θ) = X(t) + νj
      else
        record time t + θ + τj for delayed reaction j with delay τj
        if j is a consuming delayed reaction then
          X(t + θ) = X(t) + νj^s   (update reactants)
      t = t + θ
end

In a delayed setting, the SSA loses its Markov property and concurrent events become an issue, as non-delayed instantaneous reactions occur while delayed reactions wait to be updated. In our implementation [3] (see Algorithm 1), the DSSA proceeds as the SSA as long as there are no delayed reactions scheduled in the next time step. Otherwise, it ignores the waiting time and the reaction that would be updated beyond the current update point and moves instead to the scheduled delayed reaction. Furthermore, in order to avoid the possibility of obtaining negative molecular numbers, reactants and products of delayed consuming reactions must be updated separately, namely when the delayed reaction is selected and when it is completed, respectively.
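This bookkeeping can be summarised in the following Python sketch. It is our own illustration of the scheme, not the authors' implementation: the event queue of pending completions, the array layout of the stoichiometries and the split handling of consuming reactions (reactant part nu_react applied at initiation, product part nu_prod at completion) are assumptions about representation.

import heapq
import numpy as np

def dssa(x0, propensities, nu, nu_react, nu_prod, delays, consuming, T, rng):
    # delays[j] is None for non-delayed reactions; consuming[j] flags consuming delayed reactions.
    # nu[j] is the full state change; nu_react[j]/nu_prod[j] are its reactant/product parts.
    x, t, queue = np.array(x0, dtype=float), 0.0, []     # queue of (completion time, reaction index)
    while t < T:
        a = np.array([f(x) for f in propensities])
        a0 = a.sum()
        if a0 == 0.0 and not queue:
            break
        theta = np.log(1.0 / rng.random()) / a0 if a0 > 0 else np.inf
        if queue and queue[0][0] <= t + theta:           # a delayed completion falls inside (t, t+theta]
            t_k, k = heapq.heappop(queue)
            x = x + (nu_prod[k] if consuming[k] else nu[k])
            t = t_k
            continue
        j = int(np.searchsorted(np.cumsum(a), rng.random() * a0))
        if delays[j] is None:
            x = x + nu[j]                                # ordinary reaction: update immediately
        else:
            heapq.heappush(queue, (t + theta + delays[j], j))
            if consuming[j]:
                x = x + nu_react[j]                      # remove reactants now, add products later
        t = t + theta
    return x

Storing pending completions in a priority queue keeps the check for completions within (t, t + θ] a constant-time peek, which matches the structure of Algorithm 1.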

Table 1. Model parameters used for DDE and DSSA. Parameter values [5].

Parameter     Description                                        Rate constant
bh1, bh7      Her1/Her7 protein degradation rate                 0.23 min−1
bd            DeltaC protein degradation rate                    0.23 min−1
ch1, ch7      her1/her7 mRNA degradation rate                    0.23 min−1
cd            deltaC mRNA degradation rate                       0.23 min−1
ah1, ah7      Her1/Her7 protein synthesis rate (max.)            4.5 min−1
ad            DeltaC protein synthesis rate (max.)               4.5 min−1
kh1, kh7      her1/her7 mRNA synthesis rate (max.)               33 min−1
kd            deltaC mRNA synthesis rate (max.)                  33 min−1
P0            critical no. of Her1 + Her7 proteins/cell          40
D0            critical no. of Delta proteins/cell                1000
τh1m, τh7m    time to produce a single her1/her7 mRNA molecule   12.0, 7.1 min
τh1p, τh7p    time to produce a single Her1/Her7 protein         2.8, 1.7 min
τdm           time to produce a single deltaC mRNA molecule      16.0 min
τdp           time to produce a single DeltaC protein            20.5 min

Our model is based on the chemical reaction models of Lewis and Horikawa et al., but our implementation is entirely different as intrinsic noise is represented correctly for each reaction. In the initial state the number of molecules for each species is set to zero. For the 5-cell model we get 30 different species and a set of 60 reactions. The corresponding rate constants are listed in Table 1. Denote by Mh1, Mh7, Md, Ph1, Ph7 and Pd the six species her1 mRNA, her7 mRNA, deltaC mRNA, Her1 protein, Her7 protein and deltaC protein in a particular cell i.

For each cell we have 6 (non-delayed) degradations {Mh1, Mh7, Md, Ph1, Ph7, Pd} −→ 0 with reaction rate constants ch1, ch7, cd, bh1, bh7, and bd, respectively, and propensities aR1 = ch1 Mh1, aR2 = ch7 Mh7, aR3 = cd Md, aR4 = bh1 Ph1, aR5 = bh7 Ph7, and aR6 = bd Pd. The three translation reactions with delays τh1p, τh7p, and τdp are {Mh1, Mh7, Md} −→ {Mh1 + Ph1, Mh7 + Ph7, Md + Pd} with reaction rate constants ah1, ah7 and ad and propensities aR7 = ah1 Mh1, aR8 = ah7 Mh7, and aR9 = ad Md. The three regulated transcription reactions with delays τh1m, τh7m, and τdm are {Ph1, Ph7, Pd} −→ {Mh1 + Ph1, Mh7 + Ph7, Md + Pd} with reaction rate constants kh1, kh7, and kd and corresponding propensities aR10 = kh1 f(Ph1, Ph7, P̃D), aR11 = kh7 f(Ph1, Ph7, P̃D), and aR12 = kd g(Ph1, Ph7). For cells 2 to 4 the Hill function f is defined by

    f(Ph1, Ph7, P̃D) = rh · 1/(1 + (Ph1 Ph7)/P0²) + rhd · [1/(1 + (Ph1 Ph7)/P0²)] · [(P̃D/D0)/(1 + P̃D/D0)]


with P̃D = (PD^(n1) + PD^(n2))/2 (the average number of PD for the two neighboring cells n1 and n2). The parameters rh and rhd are weight parameters that determine the balance of internal and external contribution of oscillating molecules. With rh + rhd = 1 the coupling strength rhd/rh can be defined. In our experiments we set rhd = 1, that is, the coupling is 100% combinatorial. In accordance with the Horikawa model we used the Hill functions

    f(Ph1, Ph7, PD) = [1/(1 + (Ph1 Ph7)/P0²)] · [(PD/D0)/(1 + PD/D0)],

    f(Ph1, Ph7, PD) = [1/(1 + (Ph1 Ph7)/P0²)] · [(500/D0)/(1 + 500/D0)]

for cell 1 and 5, respectively. The Hill function g is given by g(Ph1, Ph7) = 1/(1 + (Ph1 Ph7)/P0²).

The single cell, single-gene model consists only of 2 species (her1 mRNA and Her1 protein) and 4 reactions. The two degradation and the single translation reactions correspond to those in the 5-cell model. For the inhibitory regulation of transcription we assume a Hill function with Hill coefficient 2 (Ph1 acts as a dimer). The Hill function takes the form f(Ph1) = 1/(1 + (Ph1/P0)²).
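As an illustration of how the regulated transcription propensities of an interior cell can be assembled from these Hill functions and the parameters of Table 1, consider the following Python sketch; the function and variable names are ours, and it is not the code used for the simulations.

P0, D0 = 40.0, 1000.0          # critical protein numbers (Table 1)
kh1 = kh7 = kd = 33.0          # maximal mRNA synthesis rates, min^-1 (Table 1)
rh, rhd = 0.0, 1.0             # rh + rhd = 1 with rhd = 1: 100% combinatorial coupling

def f_interior(Ph1, Ph7, PD_avg):
    # Hill function f for cells 2-4; PD_avg is the average DeltaC protein number
    # of the two neighbouring cells, (PD^n1 + PD^n2)/2
    repression = 1.0 / (1.0 + (Ph1 * Ph7) / P0**2)
    activation = (PD_avg / D0) / (1.0 + PD_avg / D0)
    return rh * repression + rhd * repression * activation

def g(Ph1, Ph7):
    # Hill function g regulating deltaC transcription
    return 1.0 / (1.0 + (Ph1 * Ph7) / P0**2)

def transcription_propensities(Ph1, Ph7, PD_left, PD_right):
    PD_avg = 0.5 * (PD_left + PD_right)
    aR10 = kh1 * f_interior(Ph1, Ph7, PD_avg)   # her1 transcription, delay tau_h1m
    aR11 = kh7 * f_interior(Ph1, Ph7, PD_avg)   # her7 transcription, delay tau_h7m
    aR12 = kd * g(Ph1, Ph7)                     # deltaC transcription, delay tau_dm
    return aR10, aR11, aR12

# Example call with made-up molecule numbers:
print(transcription_propensities(30.0, 25.0, 400.0, 600.0))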

3 Results and Discussion

In this section we present individual simulations of a system of 5 coupled cells, so that the dimension of the system is 30, in both the DSSA and DDE cases. Figure 2 (a,b) shows the dynamics for a single cell. In the DDE case, after an initial overshoot, the amplitudes are completely regular and the oscillatory period is approximately 40 minutes. In the intrinsic noise case there are still sustained oscillations, but there is some irregularity in the profiles and the oscillatory period is closer to 50 minutes. The time lag (5-7 min) between protein and mRNA is about the same in both cases. In Fig. 2 (c,d) we present DSSA simulations of the 5 coupled cells and give the profiles for mRNA and protein at deltaC and her1 for cell 3. Now the period of oscillation is closer to 45 minutes and the lag between protein and mRNA is about 25 minutes for deltaC and about 7 minutes for her1. Thus we see that the coupling has some effect on the period of oscillation.

In Fig. 3 we mimic an experiment by Horikawa et al. In both the DDE and the DSSA setting we disturb cell 3 after a certain time period (500 minutes in the DSSA case and 260 minutes in the DDE case). This is done by resetting all the values for cell 3 to zero at this point. This is meant to represent the experiment of Horikawa et al. in which some of the cells are replaced by oscillating cells that are out of phase. They then observed that nearly all the cells become resynchronized after three oscillations (90 min). Interestingly, in the DDE setting it only takes about 60 minutes for the onset of resynchronization, while in the DSSA setting it takes about 180 minutes. The difference may be partly due to the larger number of cells that are experimentally transplanted.


Fig. 2. (a) DDE solution and (b) single DSSA run for the Her1/Her7 single cell model. (c,d) DSSA simulation of five Delta-Notch coupled cells, showing the dynamics of deltaC mRNA and protein and her1 mRNA and protein in cell three.


Fig. 3. DSSA simulation result and DDE solution for the 5-cell array in the nondisturbed and disturbed setting. The graphs show the dynamics of deltaC and her1 mRNA in cell three. (a,c) DSSA and DDE results in the non-disturbed setting, respectively. (b,d) DSSA and DDE results in the disturbed setting. Initial conditions for cell 3 are set to zero. All other initial molecular numbers stem from the non-disturbed DSSA and DDE results in (a,c) after 500 and 260 minutes, respectively.

4 Conclusions

In this paper we have simulated Delta-Notch coupled her1/her7 oscillators for 5 cells in both the deterministic (DDE) and delayed, intrinsic noise (DSSA) settings. We have shown that there are some similarities between the dynamics of both, but the intrinsic noise simulations do make some predictions that differ from the deterministic model (see Fig. 3) and that can be verified experimentally. Thus it is important that both intrinsic noise delayed models and continuous deterministic delay models are simulated whenever insights into genetic regulation are being gleaned. However, since the time steps in the DSSA setting can be very small, there are considerable computational overheads in modelling even a chain of 5 cells. In fact, one simulation takes about 90 minutes on a Pentium 4 PC (3.06 GHz) using MATLAB 7.2. If we wish to extend these ideas to large cellular systems then we need new multiscale algorithms which will still model intrinsic noise in a delayed setting but will overcome the issues of small stepsizes. This has been considered in the non-delay case by, for example, Tian and Burrage [11] through their use of τ-leap methods, and similar ideas are needed in the delay setting. This is the subject of further work, along with considerations on how to combine spatial and temporal aspects when dealing with the lack of homogeneity within a cell.

References
1. Monk, N.A.M.: Oscillatory expression of Hes1, p53, and NF-κB driven by transcriptional time delays. Curr Biol 13 (2003) 1409–1413
2. Gillespie, D.T.: Exact stochastic simulation of coupled chemical reactions. J Phys Chem 81 (1977) 2340–2361
3. Barrio, M., Burrage, K., Leier, A., Tian, T.: Oscillatory regulation of hes1: discrete stochastic delay modelling and simulation. PLoS Comput Biol 2 (2006) e117
4. Reppert, S.M., Weaver, D.R.: Molecular analysis of mammalian circadian rhythms. Annu Rev Physiol 63 (2001) 647–676
5. Lewis, J.: Autoinhibition with transcriptional delay: a simple mechanism for the zebrafish somitogenesis oscillator. Curr Biol 13 (2003) 1398–1408
6. Hirata, H., Yoshiura, S., Ohtsuka, T., Bessho, Y., Harada, T., Yoshikawa, K., Kageyama, R.: Oscillatory expression of the bHLH factor Hes1 regulated by a negative feedback loop. Science 298 (2002) 840–843
7. Jensen, M.H., Sneppen, K., Tiana, G.: Sustained oscillations and time delays in gene expression of protein Hes1. FEBS Lett 541 (2003) 176–177
8. Giudicelli, F., Lewis, J.: The vertebrate segmentation clock. Curr Opin Genet Dev 14 (2004) 407–414
9. Horikawa, K., Ishimatsu, K., Yoshimoto, E., Kondo, S., Takeda, H.: Noise-resistant and synchronized oscillation of the segmentation clock. Nature 441 (2006) 719–723
10. Burrage, K., Tian, T., Burrage, P.: A multi-scaled approach for simulating chemical reaction systems. Prog Biophys Mol Biol 85 (2004) 217–234
11. Tian, T., Burrage, K.: Binomial leap methods for simulating stochastic chemical kinetics. J Chem Phys 121 (2004) 10356–10364

Multiscale Modeling of Biopolymer Translocation Through a Nanopore

Maria Fyta1, Simone Melchionna2, Efthimios Kaxiras1, and Sauro Succi3

1 Department of Physics and Division of Engineering and Applied Sciences, Harvard University, Cambridge MA 02138, USA
[email protected], [email protected]
2 INFM-SOFT, Department of Physics, Università di Roma La Sapienza, P.le A. Moro 2, 00185 Rome, Italy
[email protected]
3 Istituto Applicazioni Calcolo, CNR, Viale del Policlinico 137, 00161 Rome, Italy
[email protected]

Abstract. We employ a multiscale approach to model the translocation of biopolymers through nanometer size pores. Our computational scheme combines microscopic Langevin molecular dynamics (MD) with a mesoscopic lattice Boltzmann (LB) method for the solvent dynamics, explicitly taking into account the interactions of the molecule with the surrounding fluid. Both dynamical and statistical aspects of the translocation process were investigated, by simulating polymers of various initial configurations and lengths. For a representative molecule size, we explore the effects of important parameters that enter the simulation, paying particular attention to the strength of the molecule-solvent coupling and of the external electric field which drives the translocation process. Finally, we explore the connection between the generic polymers modeled in the simulation and DNA, for which interesting recent experimental results are available.

1 Introduction

Biological systems exhibit a complexity and diversity far richer than the simple solid or fluid systems traditionally studied in physics or chemistry. The powerful quantitative methods developed in the latter two disciplines to analyze the behavior of prototypical simple systems are often difficult to extend to the domain of biological systems. Advances in computer technology and breakthroughs in simulational methods have been constantly reducing the gap between quantitative models and actual biological behavior. The main challenge remains the wide and disparate range of spatio-temporal scales involved in the dynamical evolution of complex biological systems. In response to this challenge, various strategies have been developed recently, which are in general referred to as "multiscale modeling". These methods are based on composite computational schemes in which information is exchanged between the scales. We have recently developed a multiscale framework which is well suited to address a class of biologically related problems. This method involves different


levels of the statistical description of matter (continuum and atomistic) and is able to handle different scales through the spatial and temporal coupling of a mesoscopic fluid solvent, using the lattice Boltzmann method [1] (LB), with the atomistic level, which employs explicit molecular dynamics (MD). The solvent dynamics does not require any form of statistical ensemble averaging as it is represented through a discrete set of pre-averaged probability distribution functions, which are propagated along straight particle trajectories. This dual field/particle nature greatly facilitates the coupling between the mesoscopic fluid and the atomistic level, which proceeds seamlessly in time and only requires standard interpolation/extrapolation for information transfer in physical space. Full details on this scheme are reported in Ref. [2]. We must note that, to the best of our knowledge, although LB and MD with Langevin dynamics have been coupled before [3], this is the first time that such a coupling is put in place for long molecules of biological interest.

Motivated by recent experimental studies, we apply this multiscale approach to the translocation of a biopolymer through a narrow pore. These kinds of biophysical processes are important in phenomena like viral infection by phages, inter-bacterial DNA transduction or gene therapy [4]. In addition, they are believed to open a way for ultrafast DNA-sequencing by reading the base sequence as the biopolymer passes through a nanopore. Experimentally, translocation is observed in vitro by pulling DNA molecules through micro-fabricated solid state or membrane channels under the effect of a localized electric field [5]. From a theoretical point of view, simplified schemes [6] and non-hydrodynamic coarse-grained or microscopic models [7,8] are able to analyze universal features of the translocation process. This, though, is a complex phenomenon involving the competition between many-body interactions at the atomic or molecular scale, fluid-atom hydrodynamic coupling, as well as the interaction of the biopolymer with wall molecules in the region of the pore. A quantitative description of this complex phenomenon calls for state-of-the-art modeling, towards which the results presented here are directed.

2 Numerical Set-Up

In our simulations we use a three-dimensional box of size Nx × Nx /2 × Nx /2 in units of the lattice spacing Δx. The box contains both the polymer and the fluid solvent. The former is initialized via a standard self-avoiding random walk algorithm and further relaxed to equilibrium by Molecular Dynamics. The solvent is initialized with the equilibrium distribution corresponding to a constant density and zero macroscopic speed. Periodicity is imposed for both the fluid and the polymer in all directions. A separating wall is located in the mid-section of the x direction, at x/Δx = Nx /2, with a square hole of side h = 3Δx at the center, through which the polymer can translocate from one chamber to the other. For polymers with up to N = 400 beads we use Nx = 80; for larger polymers Nx = 100. At t = 0 the polymer resides entirely in the right chamber at


x/Δx > Nx/2. The polymer is advanced in time according to the following set of Molecular Dynamics-Langevin equations for the bead positions rp and velocities vp (index p runs over all beads):

    Mp dvp/dt = − Σq ∂rp VLJ(rp − rq) + γ(up − vp) + Mp ξp − λp ∂rp κp              (1)

The beads interact among themselves through a Lennard-Jones potential with σ = 1.8 and ε = 10−4:

    VLJ(r) = 4ε [ (σ/r)^12 − (σ/r)^6 ]                                              (2)

This potential is augmented by an angular harmonic term to account for distortions of the angle between consecutive bonds. The second term in Eq. (1) represents the mechanical friction between a bead and the surrounding fluid; up is the fluid velocity evaluated at the bead position and γ the friction coefficient. In addition to mechanical drag, the polymer feels the effects of stochastic fluctuations of the fluid environment through the random term ξp, the third term in Eq. (1), which is an uncorrelated random term with zero mean. Finally, the last term in Eq. (1) is the reaction force resulting from N − 1 holonomic constraints for molecules modelled with rigid covalent bonds. The bond length is set at b = 1.2 and Mp is the bead mass, equal to 1.
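A minimal Python sketch of the force evaluation entering Eq. (1) is given below. It is illustrative only: the noise amplitude xi_amp and the omission of the angular term and of the rigid-bond constraint solver are our simplifications, and u_fluid stands for the LB fluid velocity interpolated at the bead position.

import numpy as np

sigma, eps = 1.8, 1.0e-4        # Lennard-Jones parameters quoted above
gamma, M = 0.1, 1.0             # friction coefficient and bead mass
xi_amp = 1.0e-2                 # noise amplitude (placeholder; not specified in the text)

def lj_force(rij):
    # Force on bead p due to bead q derived from Eq. (2); rij = r_p - r_q
    r2 = float(np.dot(rij, rij))
    sr6 = (sigma**2 / r2) ** 3
    return 24.0 * eps * (2.0 * sr6**2 - sr6) * rij / r2

def bead_acceleration(p, r, v, u_fluid, rng):
    # Right-hand side of Eq. (1) divided by M_p, without the angular and constraint terms
    f = np.zeros(3)
    for q in range(len(r)):
        if q != p:
            f += lj_force(r[p] - r[q])
    f += gamma * (u_fluid - v[p])                   # drag towards the local fluid velocity
    f += M * xi_amp * rng.standard_normal(3)        # stochastic term M_p * xi_p (zero mean)
    return f / M

# Example call with three beads at made-up coordinates:
rng = np.random.default_rng(0)
r = [np.zeros(3), np.array([1.5, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])]
v = [np.zeros(3) for _ in r]
a1 = bead_acceleration(1, r, v, u_fluid=np.array([0.01, 0.0, 0.0]), rng=rng)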


Fig. 1. Snapshots of a typical event: a polymer (N = 300) translocating from the right to the left is depicted at a time equal to (a) 0.11, (b) 0.47, and (c) 0.81 of the total time for this translocation. The vertical line in the middle of each panel shows the wall.

Translocation is induced by a constant electric force (Fdrive) which acts along the x direction and is confined in a rectangular channel of size 3Δx × Δx × Δx along the streamwise (x) and cross-flow (y, z) directions. The solvent density and kinematic viscosity are 1 and 0.1, respectively, and the temperature is kBT = 10−4. All parameters are in units of the LB timestep Δt and lattice spacing Δx, which we set equal to 1. Additional details have been presented in Ref. [2]. In our simulations we use Fdrive = 0.02 and a friction coefficient γ = 0.1. It should be kept in mind that γ is a parameter governing both the structural relaxation of the polymer towards equilibrium and the strength of the coupling with the surrounding fluid. The MD timestep is a fraction of the timestep for the LB part, Δt = mΔtMD, where m is a constant typically set at m = 5. With this parametrization, the process falls in the fast translocation regime, where


the total translocation time is much smaller than the Zimm relaxation time. We refer to this set of parameters as our “reference”; we explore the effect of the most important parameters for certain representative cases.

3 Translocation Dynamics

Extensive simulations of a large number of translocation events over 100−1000 initial polymer configurations for each length confirm that most of the time during the translocation process the polymer assumes the form of two almost compact blobs on either side of the wall: one of them (the untranslocated part, denoted by U) is contracting and the other (the translocated part, denoted by T) is expanding. Snapshots of a typical translocation event shown in Fig. 1 strongly support this picture. A radius of gyration RI(t) (with I = U, T) is assigned to each of these blobs, following a static scaling law with the number of beads NI: RI(t) ∼ NI(t)^ν, with ν ≈ 0.6 being the Flory exponent for a three-dimensional self-avoiding random walk. Based on the conservation of polymer length, NU + NT = Ntot, an effective translocation radius can be defined as RE(t) ≡ (RT(t)^(1/ν) + RU(t)^(1/ν))^ν. We have shown that RE(t) is approximately constant for all times when the static scaling applies, which is the case throughout the process except near the end points (initiation and completion of the event) [2]. At these end points, deviations from the mean field picture, where the polymer is represented as two uncorrelated compact blobs, occur.

The volume of the polymer also changes after its passage through the pore. At the end, the radius of gyration is considerably smaller than it was initially: RT(tX) < RU(0), where tX is the total translocation time for an individual event. For our reference simulation, an average over a few hundred events for N = 200 beads gave λR = RT(tX)/RU(0) ∼ 0.7. This reveals that as the polymer passes through the pore it becomes more compact than it was at the initial stage of the event, due to incomplete relaxation.

The variety of different initial polymer realizations produces a scaling-law dependence of the translocation times on length [8]. By accumulating all events for each length, duration histograms were constructed. The resulting distributions deviate from simple Gaussians and are skewed towards longer times (see Fig. 2(a) inset). Hence, the translocation time for each length is not assigned to the mean, but to the most probable time (tmax), which is the position of the maximum in the histogram (marked by the arrow in the inset of Fig. 2(a) for the case N = 200). By calculating the most probable times for each length, a superlinear relation between the translocation time τ and the number of beads N is obtained and is reported in Fig. 2(a). The exponent in the scaling law τ(N) ∼ N^α is calculated as α ≈ 1.28 ± 0.01, for lengths up to N = 500 beads. The observed exponent is in very good agreement with a recent experiment on double-stranded DNA translocation, which reported α ≈ 1.27 ± 0.03 [9]. This agreement makes it plausible that the generic polymers modeled in our simulations can be thought of as DNA molecules; we return to this issue in Section 5.
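The two quantities used in this analysis, the effective radius RE(t) and the scaling exponent α, can be computed as in the short Python sketch below; the data passed to the fit are synthetic numbers generated from the quoted law purely to show the call, not the paper's measurements.

import numpy as np

nu = 0.6    # Flory exponent

def effective_radius(RT, RU):
    # R_E(t) = (R_T^{1/nu} + R_U^{1/nu})^nu
    return (RT ** (1.0 / nu) + RU ** (1.0 / nu)) ** nu

def scaling_exponent(N, tmax):
    # Fit tau(N) ~ N^alpha on log-log axes; returns alpha
    alpha, _ = np.polyfit(np.log(N), np.log(tmax), 1)
    return alpha

N = np.array([50, 100, 200, 300, 400, 500])
tmax = 2.0 * N ** 1.28              # synthetic data following the quoted exponent
print(scaling_exponent(N, tmax))    # ~1.28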

Fig. 2. (a) Scaling of τ with the number of beads N. Inset: distribution of translocation times over 300 events for N = 200. Time is given in units of the LB timestep. The arrow shows the most probable translocation time for this length. Effect of the various parameters on the scaling law: (b) changing the value of the MD timestep (ΔtMD); (c) changing the value of the solvent-molecule coupling coefficient γ.

4 Effects of Parameter Values

We next investigate the effect that the various parameters have on the simulations, using as the standard of comparison the parameter set that we called the "reference" case. For all lengths and parameters, about 100 different initial configurations were generated to assess the statistical and dynamical features of the translocation process. As a first step we simulate polymers of different lengths (N = 20−200). Following a procedure similar to the previous section, we extract the scaling laws for the translocation time and their variation with the friction coefficient γ and the MD timestep ΔtMD. The results are shown in Fig. 2(b) and (c). In these calculations the error bars were also taken into account. The scaling exponent for our reference simulation (γ = 0.1) presented in Fig. 2(a) is α ≈ 1.27 ± 0.01 when only the lengths up to N = 200 are included. The exponent for smaller damping (γ = 0.05) is α ≈ 1.32 ± 0.06, and for larger damping (γ = 0.5) α ≈ 1.38 ± 0.04. By increasing γ by one order of magnitude, the time scale rises by approximately one order of magnitude, showing an almost linear dependence of the translocation time on the hydrodynamic friction; we discuss this further in the next section. However, for larger γ, thus overdamped dynamics and a smaller influence of the driving force, the deviation from the α = 1.28 exponent suggests a systematic departure from the fast translocation regime. A similar analysis for various values of ΔtMD shows that the exponent becomes α ≈ 1.34 ± 0.04 when ΔtMD is equal to the LB timestep (m = 1); for m = 10 the exponent is α ≈ 1.32 ± 0.04, while for m = 20, α ≈ 1.28 ± 0.01, with similar prefactors.

We next consider what happens when we fix the length to N = 200 and vary γ and the pulling force Fdrive. For all forces used, the process falls in the fast translocation regime. The most probable time (tmax) for each case was calculated and the results are shown in Fig. 3. The dependence of tmax on γ is linear, related to the linear dependence of τ on γ mentioned above. The variation of tmax with Fdrive follows an inverse power law: tmax ∼ 1/Fdrive^μ,

Fig. 3. Variation of tmax with (a) γ, and (b) Fdrive for N = 200 beads

with μ of the order of 1. The effect of γ is further explored in relation to the effective radius of gyration RE, and is presented in Fig. 4. The latter must be constant when the static scaling R ∼ N^0.6 holds. This is confirmed for small γ, up to about 0.2. As γ increases, RE is no longer constant with time, and shows interesting behavior: it increases continuously up to a point where a large fraction of the chain has passed through the pore, and subsequently drops to a value smaller than the initial RU(0). Hence, as γ increases, large deviations from the static scaling occur and the translocating polymer can no longer be represented as two distinct blobs. In all cases, the translocated blob becomes more compact. For all values of γ considered, λR is always less than unity, ranging from 0.7 (γ = 0.1) to 0.9 (γ = 0.5), following no specific trend with γ.

Fig. 4. The dependence of the effective radius of gyration RE(t) on γ (N = 200). Time and RE are scaled with respect to the total translocation time and RU(0) for each case.

5 Mapping to Real Biopolymers

As a final step towards connecting our computer simulations to real experiments, and after having established the agreement in terms of the scaling behavior, we investigate the mapping of the polymer beads to double-stranded DNA. In order to interpret our results in terms of physical units, we turn to the persistence length (lp) of the semiflexible polymers used in our simulations. Accordingly, we use the formula for the fixed-bond-angle model of a worm-like chain [10]:

    lp = b / (1 − cos θ)                                                           (3)


where θ is complementary to the average bond angle between adjacent bonds. In lattice units (Δx), the average persistence length for the polymers considered was found to be approximately 12. For λ-phage DNA, lp ∼ 50 nm [11], which we set equal to the lp of our polymers. Thereby, the lattice spacing is Δx ∼ 4 nm, which is also the size of one bead. Given that the base-pair spacing is ∼ 0.34 nm, one bead maps approximately to 12 base pairs. With this mapping, the pore size is about 12 nm, close to the experimental pores, which are of the order of 10 nm. The polymers presented here correspond to DNA lengths in the range 0.2−6 kbp. The DNA lengths used in the experiments are larger (up to ∼ 100 kbp); the current multiscale approach can be extended to handle these lengths, assuming that appropriate computational resources are available.

Choosing polymer lengths that match experimental data, we compare the corresponding experimental duration histograms (see Fig. 1c of Ref. [9]) to the theoretical ones. This comparison sets the LB timestep to Δt ∼ 8 nsec. In Fig. 5 the time distributions for representative DNA lengths simulated here are shown. In this figure, physical units are used according to the mapping described above, which promotes comparison with similar experimental data [9]. The MD timestep for m = 5 will then be tMD ∼ 40 nsec, indicating that the MD timescale related to the coarse-grained model that handles the DNA molecules is significantly stretched relative to the physical process. An exact match to all the experimental parameters is of course not feasible with coarse-grained simulations. However, essential features of DNA translocation are captured, allowing the use of the current approach to model similar biophysical processes that involve biopolymers in solution. This can become more efficient by exploiting the freedom of further fine-tuning the parameters used in this multiscale model.
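The unit mapping just described can be reproduced with a few lines of Python; the numbers are those quoted above, and the small rounding differences (4.2 nm versus ~4 nm, 12.3 versus ~12 bp) are expected.

import numpy as np

b = 1.2                                   # bond length, lattice units
lp_lattice = 12.0                         # persistence length, lattice units (Delta x)
theta = np.arccos(1.0 - b / lp_lattice)   # average bond-angle complement from Eq. (3)

lp_nm = 50.0                              # lambda-phage DNA persistence length [11], nm
dx_nm = lp_nm / lp_lattice                # lattice spacing, ~4.2 nm (one bead)
bead_bp = dx_nm / 0.34                    # ~12 base pairs per bead
pore_nm = 3 * dx_nm                       # pore side h = 3 Delta x, ~12.5 nm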

Fig. 5. Histograms of calculated translocation times for a large number of events and different DNA lengths. The arrows link to the most probable time (tmax ) for each case.

6 Conclusions

In summary, we applied a multiscale methodology to model the translocation of a biopolymer through a nanopore. Hydrodynamic correlations between the polymer and the surrounding fluid have explicitly been included. The polymer obeys a static scaling except near the end points for each event (initiation and completion of the process), and the translocation times follow a superlinear scaling law in


the polymer length. A preliminary exploration of the effects of the most important parameters used in our simulations was also presented, specifically the values of the friction coefficient and the pulling force describing the effect of the external electric field that drives the translocation. These were found to significantly affect the dynamic features of the process. Finally, our generic polymer models were directly mapped to double-stranded DNA and a comparison to experimental results was discussed.

Acknowledgments. MF acknowledges support by Harvard's Nanoscale Science and Engineering Center, funded by NSF (Award No. PHY-0117795).

References
1. Wolf-Gladrow, D.A.: Lattice Gas Cellular Automata and Lattice Boltzmann Models. Springer Verlag, New York (2000); Succi, S.: The Lattice Boltzmann Equation. Oxford University Press, Oxford (2001); Benzi, R., Succi, S., and Vergassola, M.: The lattice Boltzmann equation - theory and applications. Phys. Rep. 222 (1992) 145–197.
2. Fyta, M.G., Melchionna, S., Kaxiras, E., and Succi, S.: Multiscale coupling of molecular dynamics and hydrodynamics: application to DNA translocation through a nanopore. Multiscale Model. Simul. 5 (2006) 1156–1173.
3. Ahlrichs, P. and Duenweg, B.: Lattice-Boltzmann simulation of polymer-solvent systems. Int. J. Mod. Phys. C 9 (1999) 1429–1438; Simulation of a single polymer chain in solution by combining lattice Boltzmann and molecular dynamics. J. Chem. Phys. 111 (1999) 8225–8239.
4. Lodish, H., Baltimore, D., Berk, A., Zipursky, S., Matsudaira, P., and Darnell, J.: Molecular Cell Biology. W.H. Freeman and Company, New York (1996).
5. Kasianowicz, J.J., et al.: Characterization of individual polynucleotide molecules using a membrane channel. Proc. Nat. Acad. Sci. USA 93 (1996) 13770–13773; Meller, A., et al.: Rapid nanopore discrimination between single polynucleotide molecules. 97 (2000) 1079–1084; Li, J., et al.: DNA molecules and configurations in a solid-state nanopore microscope. Nature Mater. 2 (2003) 611–615.
6. Sung, W. and Park, P.J.: Polymer translocation through a pore in a membrane. Phys. Rev. Lett. 77 (1996) 783–786.
7. Matysiak, S., et al.: Dynamics of polymer translocation through nanopores: theory meets experiment. Phys. Rev. Lett. 96 (2006) 118103.
8. Lubensky, D.K. and Nelson, D.R.: Driven polymer translocation through a narrow pore. Biophys. J. 77 (1999) 1824–1838.
9. Storm, A.J., et al.: Fast DNA translocation through a solid-state nanopore. Nanolett. 5 (2005) 1193–1197.
10. Yamakawa, H.: Modern Theory of Polymer Solutions. Harper & Row, New York (1971).
11. Hagerman, P.J.: Flexibility of DNA. Annu. Rev. Biophys. Biophys. Chem. 17 (1988) 265–286; Smith, S., Finzi, L., and Bustamante, C.: Direct mechanical measurement of the elasticity of single DNA molecules by using magnetic beads. Science 258 (1992) 1122–1126.

Multi-physics and Multi-scale Modelling in Cardiovascular Physiology: Advanced User Methods for Simulation of Biological Systems with ANSYS/CFX

V. Díaz-Zuccarini1, D. Rafirou2, D.R. Hose1, P.V. Lawford1, and A.J. Narracott1

1 University of Sheffield, Academic Unit of Medical Physics, Royal Hallamshire Hospital, Glossop Road, S10 2JF, Sheffield, UK
2 Electrical Engineering Department / Biomedical Engineering Center, Technical University of Cluj-Napoca, 15, C. Daicoviciu Street, 400020 Cluj-Napoca, Romania
[email protected]

Abstract. This work encompasses a number of diverse disciplines (physiology, biomechanics, fluid mechanics and simulation) in order to develop a predictive model of the behaviour of a prosthetic heart valve in vivo. The application of simulation for the study of other cardiovascular problems, such as blood clotting, is also discussed. A commercial, finite volume, computational fluid dynamics (CFD) code (ANSYS/CFX) is used for the 3D component of the model. This software provides technical options for advanced users which allow user-specific variables to be defined that will interact with the flow solver. User-defined functions and junction boxes offer appropriate options to facilitate multi-physics and multi-scale complex applications. Our main goal is to present a 3D model using the internal features available in ANSYS/CFX coupled to a multiscale model of the left ventricle to address complex cardiovascular problems.

Keywords: Multi-scale models, cardiovascular modelling & simulation, mechanical heart valves, coupled 3D-lumped parameter models, simulation software.

1 Introduction

Modelling and simulation can be used to explore complex interactions which occur in the human body. In order to model these biological processes, a multi-scale approach is needed. One of the "success stories" in bioengineering is the study of the fluid dynamics of blood (haemodynamics) within the cardiovascular system and the relationship between haemodynamics and the development of cardiovascular disease [1]. Cardiovascular models present a particular challenge in that they require both a multi-scale and a multi-physics approach. Using finite elements for the whole system is computationally prohibitive; thus, a compromise is needed. The most sophisticated fluid-solid interaction structures [2] provide exquisite detail in the fluid domain, but are limited in that the boundary conditions are prescribed. An alternative multiscale solution is to couple lumped parameter models of the boundary conditions with a finite element model of the part where detail and accuracy are needed [3]. Significant improvement can be made in terms of the understanding of the underlying physics if the lumped parameter approach includes more physiologically representative


mechanisms rather than traditional "black-box" models. Usually, boundary conditions are expressed in terms of pressure and flow; these are the macroscopic expression of physiological or pathological conditions. These macroscopic variables can be related to the microscopic level of the physiology/pathology, through the molecular/cellular aspects of the process, with the definition of a new range of variables. These new variables will not be available to the user as normal boundary conditions, as they are totally dependent on the level of modelling chosen for the problem under study.

2 ANSYS/CFX Special Features: Crossing the Flow Interface Towards Biological Modelling

ANSYS/CFX is highly specialized software, built in such a way that interaction with specific features of the solver is kept to a minimum. Whilst this is appropriate for many applications, in order to run multi-scale applications the user must be able to interact with the solver at a lower level. The user needs to be able to define where, when and how the equations and variables imposed as boundaries (i.e. mathematical models) will participate in the solution. In this paper we describe two options which ANSYS/CFX provides for advanced users: Command Expression Language (CEL) functions/subroutines and Junction Boxes. We will use CEL functions/subroutines at the flow interface to solve the equations of the boundary conditions at different scales, and Junction Boxes to provide a structure to update the variables.

Figure 1 summarises the sequence of user routines in a transient run: an Initialisation Subroutine (Junction Box) at Start of Run defines data areas and variables and sets initial values and/or values for a restarted run; a Start of Timestep Subroutine (Junction Box) at Start of Time step updates and stores variables at each timestep; within the coefficient loop the linear solution is computed and the User CEL Function(s) are accessed every time information about the boundaries is needed; an End of Coefficient Loop Subroutine (Junction Box) at End of Coefficient Loop updates variables at each coefficient loop; and, at the end of the timestep and end of the run, a User Output Subroutine (Junction Box) at User Output writes output files and values.

Fig. 1. Structure of an ANSYS/CFX model using Junction Boxes and CEL Subroutines

Definitions [4]:

• CEL Subroutines: These are used in conjunction with User CEL Functions to define quantities in ANSYS CFX-Pre based on a FORTRAN subroutine. The User CEL Routine is used to set the calling name, the location of the Subroutine and the location of the Shared Libraries associated with the Subroutine.


• CEL Functions: User CEL Functions are used in conjunction with User CEL Routines. User Functions set the name of the User CEL Routine associated with the function, and the input and return arguments.

• Junction Box Routines: These are used to call user-defined FORTRAN subroutines during execution of the ANSYS CFX-Solver. A Junction Box Routine object must be created so that the ANSYS CFX-Solver knows the location of the Subroutine, the location of its Shared Libraries and when to call the Subroutine.

A general form of a transient and structured model in ANSYS/CFX is shown in Fig. 1. When solving differential or partial differential equation models as boundary conditions for an ANSYS/CFX model, the equations describing the boundary condition must be discretized in such a way that variables are solved and passed to ANSYS/CFX at each time-step and updated within coefficient loops. Using ANSYS/CFX functions, the values of physical quantities at specific locations of the 3D model are passed as arguments to the boundary condition model, and a specific value is returned to ANSYS/CFX to update the boundary condition for that region. Using this approach, the boundary conditions of the 3D model are updated at each coefficient loop of the flow solver, providing close coupling of the 3D ANSYS/CFX model and the boundary condition model.
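The control flow imposed by these routines can be summarised by the schematic Python sketch below. It is not ANSYS/CFX code and does not use the CFX FORTRAN interfaces; the solver object, its method names and the toy compliance-chamber boundary model are placeholders chosen only to show where, in a transient run, the boundary-condition model is solved and its value handed back to the flow solver.

class MockSolver:
    # Stand-in for the 3D flow solver so the skeleton can be executed as-is
    def initialise(self): self.bc = {}
    def start_of_timestep(self, step): pass
    def flow_at_boundary(self, name): return 1.0e-4           # dummy volumetric flow rate
    def set_boundary_value(self, name, value): self.bc[name] = value
    def solve_coefficient_loop(self): pass
    def end_of_timestep(self): pass
    def write_output(self): pass

class BoundaryModel:
    # Toy compliance chamber dP/dt = Q/C, standing in for the lumped-parameter model
    def __init__(self, p0=10.0, compliance=1.0e-3):
        self.p = self.p_old = p0
        self.C = compliance
    def begin_step(self):
        self.p_old = self.p                  # freeze the start-of-timestep state
    def update(self, q_in, dt):
        # re-integrated from the start-of-timestep state at every coefficient loop
        self.p = self.p_old + (q_in / self.C) * dt
        return self.p

def run(solver, model, n_steps=10, n_coeff_loops=5, dt=1.0e-3):
    solver.initialise()                                        # "Start of Run" junction box
    for step in range(n_steps):
        solver.start_of_timestep(step)                         # "Start of Timestep" junction box
        model.begin_step()
        for _ in range(n_coeff_loops):
            q = solver.flow_at_boundary("inlet")               # quantity passed to the CEL function
            p = model.update(q, dt)                            # boundary model solved at this scale
            solver.set_boundary_value("inlet", p)              # value returned to the flow solver
            solver.solve_coefficient_loop()                    # "End of Coefficient Loop" update point
        solver.end_of_timestep()
    solver.write_output()                                      # "User Output" junction box

run(MockSolver(), BoundaryModel())

Re-integrating the lumped model from the frozen start-of-timestep state at every coefficient loop mirrors the update pattern described above, in which boundary values are refreshed within coefficient loops rather than only once per timestep.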

3 Case Study: Coupling a Model of the Left Ventricle to an Idealized Heart Valve


This section describes the development of more complex boundary conditions to represent the behaviour of the left ventricle (LV). We also describe the coupling of the LV model to a fluid-structure interaction (FSI) model of an idealized heart valve.

Fig. 2. Representation of the LV model and its different physical scales. Cardiac contraction starts at protein level in the sarcomere. Protein contraction produces tension in the cardiac wall that then is translated into pressure.


Fig. 2 shows the sub-levels of organisation of the ventricle that are taken into account to give a "more physiologically realistic" model of the LV. Following the procedure described in ex vivo isolated LV preparations, a constant pressure source is connected to the ventricle via the mitral valve (input model). The blood fills the LV (output model) and ejects a volume of blood into the arterial network via the aortic valve. Contraction in the cardiac muscle is described starting from the microscopic level of contractile proteins (actin and myosin), up to the tissue (muscle) level and to the LV level, to finally reach the haemodynamic part of the LV and its arterial load.

3.1 A Multi-scale Model of the Left Ventricle

The contractile proteins (actin and myosin) acting at the level of the sarcomere produce cardiac contraction. These slide over each other [5], attaching and detaching by means of cross-bridges, to produce shortening of the sarcomere and hence contraction of the ventricular wall (Fig. 2). To represent the chemical reaction from the detached state (XD) to the attached one (XA) and vice versa, we use the simplest model available:

    XD ⇌ XA   (attachment rate ka, detachment rate kd)                              (1)

The periodically varying kinetic constants of attachment and detachment are ka and kd. The chemical potential of a thermodynamic system is the amount by which the energy of the system would change if an additional particle were introduced, with the entropy and volume held fixed. The modified Nernst formula provides a first approximation:

    μa = (Aa + Ba(T) ln(XA)) β(lm)                                                  (2)

    μd = Ad + Bd(T) ln(XD)                                                          (3)

The only non-classical feature is the β factor of equation (2). For a more detailed explanation, the reader is referred to [6],[7]. Using the usual expression of the reaction rate in chemical thermodynamics, the reaction flow may be expressed as:

    dXA/dt = kA(t) · e^((μd − Ad)/Bd) − kD(t) · e^((μa − βAa)/Ba)                    (4)

This can be reduced to the following expression:

    dXA/dt = kA(t)(1 − XA(t)) − kD(t) XA(t)^β                                        (5)
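To illustrate how Eq. (5), as reconstructed here, behaves over a cardiac cycle, the following Python sketch integrates it with explicit Euler; the periodic waveforms chosen for kA(t) and kD(t), the cycle length and the value of β are illustrative assumptions, since they are not given in this excerpt.

PERIOD, SYSTOLE = 0.8, 0.3       # assumed cardiac cycle and systolic duration, s

def k_a(t, k_max=30.0, k_min=1.0):
    # Assumed periodic attachment rate: high during systole, low in diastole
    return k_max if (t % PERIOD) < SYSTOLE else k_min

def k_d(t, k_max=30.0, k_min=1.0):
    # Assumed periodic detachment rate, out of phase with k_a
    return k_min if (t % PERIOD) < SYSTOLE else k_max

def step_XA(XA, t, dt, beta=1.0):
    # Explicit Euler step of Eq. (5): dXA/dt = kA(t)(1 - XA) - kD(t) XA^beta
    return XA + dt * (k_a(t) * (1.0 - XA) - k_d(t) * XA**beta)

XA, t, dt = 0.0, 0.0, 1.0e-3
trace = []
while t < 2.0 * PERIOD:          # two assumed cardiac cycles
    XA = step_XA(XA, t, dt)
    t += dt
    trace.append((t, XA))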

We can now relate the chemical and mechanical aspects of the model, to describe the chemo-mechanic transduction in the cardiac muscle. A single muscle equivalent to all the muscles of the heart is considered. Its length is lm and XA is its number of attached cross bridges. Neglecting the energy in free cross bridges, the energy of the muscle is given by:

    Em = Em(lm, XA)                                                                 (6)


The mechanical power is obtained by derivation: Pm =

∂E m dl m ∂E m dX A + = f m v m + μ a X A ∂l m dt ∂X a dt

(7)

As shown in equation (7), the mechanical force is obtained by derivation of the energy equation of the muscle with respect to the attached cross-bridges (EX) and the length of muscle fibers (lm). f m = E X ( X A )E ' a (l m ) + (1 − E X ( X A ))E ' r (l m )

(8)

E’a and E’r are respectively, the derivative of the energy of the muscle in active (a) and resting (r) state respect to lm and they are expressed as functions of the muscular force fm. Full details of the formulation and assumptions may be found in [6],[7]. In our model, the cardiac chamber transforms muscular force into pressure and volumetric flow into velocity of the muscle. We assume an empiric relationship ϕG between volume QLV and equivalent fibre length lm

QLV (t ) = ϕ G (l m (t )) ↔ l m (t ) = ϕ G−1 (Q LV (t ))

(9)

Assuming ϕ_G is known and by differentiation of (9), a relationship between the velocity of the muscle dl_m/dt and the ventricular volumetric flow dQ_LV/dt is obtained:

dQ_LV(t)/dt = ψ_G · dl_m/dt                                                          (10)

where ψ_G = ψ_G(Q_LV) is the derivative of ϕ_G with respect to l_m. If the transformation is power-conservative, the mechanical power (depending on the muscular force f_m) equals the hydraulic power (depending on the pressure P):

P · dQ_LV/dt = N · f_m · dl_m/dt                                                     (11)

where we have supposed that the pressure P is created from the forces of N identical fibres. Equation (12) is then obtained:

P = N f_m / ψ_G(l_m)                                                                 (12)
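As an illustration of the chamber transformation (9)-(12), the fragment below uses a hypothetical linear volume-length relation ϕ_G to convert muscle force into ventricular pressure. The coefficients, N and the input values are invented for the example; only the structure P = N f_m/ψ_G follows the text.

```python
# hypothetical linear volume-length relation Q_LV = phi_G(l_m)  (Eq. 9)
PHI_A, PHI_B = 50.0, 10.0          # made-up coefficients

def phi_G(lm):
    return PHI_A + PHI_B * lm

def psi_G(lm):
    """Derivative of phi_G with respect to l_m, as used in Eqs. (10)-(12)."""
    return PHI_B                    # constant, because phi_G is linear here

def ventricular_pressure(fm, lm, N=1e6):
    """Eq. (12): power conservation P*dQ/dt = N*f_m*dl_m/dt, together with
    dQ/dt = psi_G*dl_m/dt (Eq. 10), gives P = N*f_m / psi_G(l_m)."""
    return N * fm / psi_G(lm)

print(ventricular_pressure(fm=1e-4, lm=1.0))   # illustrative numbers only
```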

A simple multi-scale model of the left ventricle has been presented, starting from the mechanisms of cardiac contraction at the protein scale. In our case, the ventricular pressure P defined by (12) will be the input to the 3D model shown in the next section.

3.2 Computational Model of the Mitral Valve and Coupling with the LV Model

The computational approach presented here is based on the use of a fluid-structure interaction model, built within ANSYS/CFX v10. It describes the interaction of a blood-like fluid flowing under a pressure gradient imposed by ventricular contraction, and the structure of a single-leaflet valve. A generic CAD model, consisting of a flat leaflet (the occluder) which moves inside a mounting ring (the housing), was built to represent the geometry of the valve (Fig. 3). The thickness of the disc-shaped occluder is h = 2 mm and its diameter is d = 21 mm. In the fully closed position, the gap between the mounting ring and the occluder is g = 0.5 mm. The movement of the occluder is considered to be purely rotational, acting around an eccentric axis situated at a distance from its centroid (the OX axis). The occluder travels 80 degrees from the fully open to the fully closed position. To complete the CAD model, two cylindrical chambers were added to represent the atrium and ventricle. The atrial chamber, positioned in the positive OZ direction, is 66 mm in length whilst the ventricular chamber, positioned in the negative OZ direction, is 22 mm in length. Only half of the valve was considered in the 3D model. Blood is considered an incompressible Newtonian fluid with density ρ = 1100 kg/m³ and dynamic viscosity μ = 0.004 kg/(m·s). The unsteady flow field inside the valve is described by the 3D equations of continuity and momentum (13) with the boundary conditions suggested by Fig. 3 (left):

ρ(∂u/∂t + (u·∇)u) = −∇p + μΔu;   ∇·u = 0                                             (13)

Fig. 3. Left: CAD model and boundary conditions (snapshot of the CFXPre implementation). At the inlet the ventricular pressure (P) comes from the ventricular model; a constant pressure is applied at the outlet. Right: General representation of a rigid body demonstrating the principles of fluid-structure interaction and rotation of a leaflet. The variables are defined in the text.

The principle used to model the fluid-structure interaction is outlined in Fig. 3 (right). The occluder rotates under the combined effects of hydrodynamic, buoyancy and gravitational forces acting on it. At every time step, the drag, f_Pz, and lift, f_Py, forces exerted by the flowing fluid on an arbitrary point P of the leaflet's surface are reduced to the centroid, G. The total contributions of drag, F_Gz, and lift, F_Gy, are added to the difference between the gravitational and buoyancy forces. Conservation of the kinetic moment is imposed, resulting in the dynamic equation that describes the motion:

dθ = ϖ_old dt + (M/(2J)) dt²                                                         (14)


In equation (14), dθ represents the leaflet's angular step, dt is the time step, ϖ_old is the angular velocity at the beginning of the current time step and M is the total moment acting on the leaflet due to the external forces:

M = r_G [ F_Gz sin θ + (F_Gy − G + A) cos θ ]                                        (15)
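A minimal sketch of the leaflet update implied by Eqs. (14)-(15), assuming the drag and lift resultants F_Gz and F_Gy and the gravity-minus-buoyancy difference (G − A) are supplied by the flow solver at each step. The lever arm r_G and the loads below are placeholder numbers; J is the value quoted just after this, 6×10⁻⁶ kg·m².

```python
import math

J = 6.0e-6       # total moment of inertia [kg m^2], quoted in the text
R_G = 8.0e-3     # lever arm r_G [m] -- placeholder value

def leaflet_step(theta, omega, F_Gz, F_Gy, grav_minus_buoy, dt):
    """Advance the leaflet angle by Eq. (14), dtheta = omega*dt + (M/(2J))*dt^2,
    with the total moment M taken from Eq. (15); grav_minus_buoy = G - A."""
    M = R_G * (F_Gz * math.sin(theta) + (F_Gy - grav_minus_buoy) * math.cos(theta))
    theta_new = theta + omega * dt + 0.5 * (M / J) * dt ** 2
    omega_new = omega + (M / J) * dt
    return theta_new, omega_new

# one illustrative step with made-up hydrodynamic loads
theta, omega = math.radians(10.0), 0.0
theta, omega = leaflet_step(theta, omega, F_Gz=0.05, F_Gy=0.01,
                            grav_minus_buoy=0.002, dt=1e-4)
print(math.degrees(theta))
```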

The total moment of inertia is J = 6×10⁻⁶ kg·m². Some simulation results of the model are shown in Fig. 4.

Fig. 4. Left: Closure degree vs. time. Right: Ventricular pressure and volume vs. time.

It is interesting to note that, from the implementation point of view, closure is continuous and smooth, although compared to experimental data the closure time is too long [8] and closure occurs too early during systole. This could be due to several reasons, including an insufficient pressure rise and/or an overestimation of the mass of the occluder. This issue is being addressed and will be the object of a future publication. As automatic remeshing tools are not available in the software, remeshing was carried out by hand and the simulation was stopped several times before full closure. Re-start and interpolation tools are available within the software and are appropriate to tackle this particular problem. A restriction in the use of the commercial software chain is that efficient implementation of an automatic remeshing capability is very difficult without close collaboration with the software developers.

4 Towards Other Biological Applications

Another feature of the ANSYS/CFX solver enables the user to define additional variables which can be used to represent other properties of the flow field. Previous work described the use of such additional variables to model the change in viscosity of blood during the formation of blood clots [9]. This is based on the work of Friedrich and Reininger [10], who proposed a model of the clotting process based on a variable blood viscosity determined by the residence time of the blood t, the viscosity at time 0, μ₀, and the rate constants k₁ and k₂, which are dependent on thrombin concentration. Fluid residence time can be modelled within ANSYS/CFX by the introduction of an additional variable which convects throughout the fluid domain. A source term of 1 is defined for the additional variable, so that its value in each element of the fluid increases by one unit for every unit of time that the fluid remains within the domain. A model of blood clotting was implemented within CFX 5.5.1 using an additional variable, labelled AGE, to represent the residence time. Due to the method of implementation in CFX, the residence time is expressed as a density and has units of kg m⁻³. Additional user functions allow the viscosity of the fluid to be defined as a function of the additional variable. Whilst this is a very simple model, it demonstrates the power of custom routines that can be utilized in such modelling. This technique has possible applications for other convective-diffusive processes.
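The residence-time idea can be illustrated outside CFX with a toy one-dimensional advected scalar that gains one unit of "age" per unit time through a unit source, exactly as described above. This is a conceptual sketch (first-order upwind, uniform velocity, made-up grid), not the ANSYS/CFX implementation.

```python
import numpy as np

def residence_time_1d(nx=200, L=1.0, u=0.5, t_end=4.0, cfl=0.5):
    """Advect a residence-time scalar with a unit source term,
       d(age)/dt + u*d(age)/dx = 1,   age = 0 for fluid entering at x = 0,
    using a first-order upwind step; at steady state age(x) -> x/u."""
    dx = L / nx
    dt = cfl * dx / u
    age = np.zeros(nx)
    t = 0.0
    while t < t_end:
        upstream = np.empty_like(age)
        upstream[0] = 0.0                 # fresh fluid enters with age 0
        upstream[1:] = age[:-1]
        age = age - u * dt / dx * (age - upstream) + dt   # unit source: +dt
        t += dt
    return age

age = residence_time_1d()
print(age[-1])    # close to L/u = 2 time units near the outlet
```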

5 Conclusions and Perspectives

In this paper, we presented a multi-scale and fully coupled model of the LV with a 3D model of the mitral valve in ANSYS/CFX. The preliminary results are encouraging. Use of advanced features of the software allows biological applications to be described. A second application demonstrates the versatility of the software.

Acknowledgement

The authors acknowledge financial support under the Marie Curie project C-CAReS and thank Dr I. Jones and Dr J. Penrose, ANSYS-CFX, for constructive advice and training.

References
1. Taylor, C.A., Hughes, T.J., Zarins, C.K.: Finite element modeling of three-dimensional pulsatile flow in the abdominal aorta: relevance to atherosclerosis. Annals of Biomedical Engineering (1998) 975–987
2. De Hart, J., Peters, G., Schreurs, P., Baaijens, F.: A three-dimensional computational analysis of fluid–structure interaction in the aortic valve. J. of Biomech. (2003) 103–112
3. Laganà, K., Balossino, R., Migliavacca, F., Pennati, G., Bove, E.L., de Leval, M., Dubini, G.: Multiscale modeling of the cardiovascular system: application to the study of pulmonary and coronary perfusions in the univentricular circulation. J. of Biomech. (2005) 1129–1141
4. ANSYS/CFX 10.0 manual, © 2005 ANSYS Europe Ltd (http://www.ansys.com)
5. Huxley, A.F.: Muscle structures and theories of contraction. Prog. Biophys. Chem. (1957) 279–305
6. Díaz-Zuccarini, V., LeFèvre, J.: An energetically coherent lumped parameter model of the left ventricle specially developed for educational purposes. Comp. in Biol. and Med. (in press)
7. Díaz-Zuccarini, V.: Etude des conditions d'efficacité du ventricule gauche par optimisation téléonomique d'un modèle de son fonctionnement. PhD Thesis, EC-Lille (2003)
8. Chandran, K.B., Dexter, E.U., Aluri, S., Richenbacher, W.: Negative pressure transients with mechanical heart-valve closure: correlation between in vitro and in vivo results. Annals of Biomedical Engineering 26(4) (1998)
9. Narracott, A., Smith, S., Lawford, P., Liu, H., Himeno, R., Wilkinson, I., Griffiths, P., Hose, R.: Development and validation of models for the investigation of blood clotting in idealized stenoses and cerebral aneurysms. J. Artif. Organs (2005) 56–62
10. Friedrich, P., Reininger, A.J.: Occlusive thrombus formation on indwelling catheters: in vitro investigation and computational analysis. Thrombosis and Haemostasis (1995) 66–72

Lattice Boltzmann Simulation of Mixed Convection in a Driven Cavity Packed with Porous Medium

Zhenhua Chai¹, Zhaoli Guo¹, and Baochang Shi²

¹ State Key Laboratory of Coal Combustion, Huazhong University of Science and Technology, 430074 Wuhan, P.R. China
[email protected], [email protected]
² Department of Mathematics, Huazhong University of Science and Technology, 430074 Wuhan, P.R. China
[email protected]

Abstract. The problem of mixed convection in a driven cavity packed with porous medium is studied with the lattice Boltzmann method. A lattice Boltzmann model for incompressible flow in porous media and another thermal lattice Boltzmann model for solving the energy equation are proposed based on the generalized volume-averaged flow model. The present models have been validated by simulating mixed convection in a driven cavity (without porous medium) and it is found that the numerical results predicted by the present models are in good agreement with available data reported in previous studies. Extensive parametric studies on mixed convection in a driven cavity filled with porous medium are carried out for various values of Reynolds number, Richardson number and Darcy number. It is found that the flow and temperature patterns change greatly with variations of these parameters.

Keywords: Lattice Boltzmann method; Mixed convection; Porous medium.

1 Introduction

Fluid flow and heat transfer in a driven cavity have recently received increasing attention because of their wide applications in engineering and science [1,2]. Some of these applications include oil extraction, cooling of electronic devices and heat transfer improvement in heat exchanger devices [3]. From a practical point of view, the research on mixed convection in a driven cavity packed with porous medium is motivated by its wide applications in engineering such as petroleum reservoirs, building thermal insulation, chemical catalytic reactors, heat exchangers, solar power collectors, packed-bed catalytic reactors, nuclear energy systems and so on [3,4]. These important applications have led to extensive investigations in this area [3,5,6,7].

In this paper, the problem of mixed convection in a driven cavity packed with porous medium is studied with the lattice Boltzmann method (LBM). The aim of the present study is to examine the effects of the Reynolds number (Re), Richardson number (Ri) and Darcy number (Da) on the characteristics of the flow and temperature fields. The numerical results in the present work indicate that the flow and temperature patterns change greatly with variations of the parameters mentioned above. Furthermore, as these parameters are varied in a wide range, some new phenomena are observed.

2 Numerical Method: The Lattice Boltzmann Method

In the past two decades, the LBM has achieved great success in simulating complex fluid flows and transport phenomena since its emergence [8]. In the present work, the LBM is extended to study mixed convection in a driven cavity filled with porous medium. The dimensionless generalized volume-averaged Navier-Stokes equations and energy equation are written as [9,10]

∇ · u = 0,                                                                           (1)
∂u/∂t + u · ∇(u/ε) = −∇(εp) + (1/Re_e) ∇²u + F,                                      (2)
σ ∂T/∂t + u · ∇T = (1/(Pr·Re)) ∇²T,                                                  (3)

where u and p are the volume-averaged velocity and pressure, respectively; ε is the porosity of the medium, Re_e is the effective Reynolds number, Pr is the Prandtl number, and σ = ε + (1 − ε)ρ_s c_ps/(ρ_f c_pf) represents the ratio between the heat capacities of the solid and fluid phases, with ρ_s (ρ_f) and c_ps (c_pf) being the density and heat capacity of the solid (fluid) phase, respectively. The total body force is

F = −(ε/(Da·Re)) u − (ε F_ε/√Da) |u|u + ε (Gr/Re_e²) k T,

where Da is the Darcy number and Re is the Reynolds number, which is assumed to equal Re_e in the present work; we would like to point out that this assumption is widely used in engineering. Gr is the Grashof number; for simplicity, the Richardson number Ri = Gr/Re² is introduced. k is the unit vector in the y-direction, and F_ε is the geometric function, defined as [11]

F_ε = 1.75/√(150 ε³).

The evolution equations of the single-particle density distribution and temperature distribution can be written as [10,12]

f_i(x + c_i δt, t + δt) − f_i(x, t) = −(1/τ_f)[f_i(x, t) − f_i^(eq)(x, t)] + δt F_i,   (4)
g_i(x + c_i δt, t + δt) − g_i(x, t) = −(1/τ_g)[g_i(x, t) − g_i^(eq)(x, t)],            (5)


where δt is the time step, f_i(x, t) and g_i(x, t) are the density distribution function and temperature distribution function, respectively, and τ_f and τ_g are the dimensionless relaxation times. f_i^(eq)(x, t) and g_i^(eq)(x, t) are the equilibrium distribution functions corresponding to f_i(x, t) and g_i(x, t), which are given as

f_i^(eq)(x, t) = ω_i ρ [1 + (c_i · u)/c_s² + (c_i · u)²/(2ε c_s⁴) − |u|²/(2ε c_s²)],   (6)
g_i^(eq)(x, t) = ω_i T [σ + (c_i · u)/c_s²],                                           (7)

where ω_i is the weight coefficient and c_s is the sound speed. Unless otherwise stated, σ in Eqs. (3) and (7) is assumed to equal 1, and the same treatment can be found in Ref. [9]. In the present work, we choose the two-dimensional nine-velocity (D2Q9) model, where the discrete velocities are given as c_0 = (0, 0), c_i = (cos[(i−1)π/2], sin[(i−1)π/2]) c (i = 1−4), c_i = (cos[(2i−9)π/4], sin[(2i−9)π/4]) √2 c (i = 5−8), where c = δx/δt and δx is the lattice spacing. The sound speed in this D2Q9 model is given as c_s = c/√3, and the weights are ω_0 = 4/9, ω_i = 1/9 (i = 1−4), ω_i = 1/36 (i = 5−8). The forcing term F_i in Eq. (4) is given as [12]

F_i = ω_i ρ (1 − 1/(2τ_f)) [ (c_i · F)/c_s² + (uF : (c_i c_i − c_s² I))/(ε c_s⁴) ].    (8)

The volume-averaged density and velocity are defined by

ρ = Σ_{i=0}^{8} f_i(x, t),     u = v / (d_0 + √(d_0² + d_1 |v|)),

where d_0, d_1 and v are defined as

d_0 = (1/2)(1 + (δt/2) ε/(Da·Re)),   d_1 = (δt/2) ε F_ε/√Da,   ρv = Σ_{i=0}^{8} c_i f_i + (δt/2) ε ρ (Gr/Re_e²) k T.

Through the Chapman-Enskog expansion, and in the incompressible limit, the macroscopic equations (1)-(3) can be derived; further detailed analysis of this procedure can be found in Refs. [10,12]. In addition, the boundary conditions should be treated carefully; here the non-equilibrium extrapolation scheme proposed in Ref. [13] is used, because it has exhibited better numerical stability in numerical simulations.
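To summarise the scheme, the following NumPy sketch condenses Eqs. (4), (6) and (8) together with the velocity reconstruction above into a single flow-field update on a periodic D2Q9 lattice (lattice units, Re assumed equal to Re_e as stated in the text). The thermal LBE (5) and wall boundary conditions are omitted, the relation between τ_f and Re_e used in the usage lines is the standard LBGK one, and all parameter values are arbitrary; this illustrates the structure of the model, not the authors' code.

```python
import numpy as np

# D2Q9 lattice (lattice units: dx = dt = 1, so c = 1)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def feq(rho, u, eps):
    """Equilibrium distribution of Eq. (6) for the generalized model."""
    cu = np.einsum('qd,xyd->qxy', c, u)                 # c_i . u
    uu = np.einsum('xyd,xyd->xy', u, u)                 # |u|^2
    return w[:, None, None] * rho * (1 + cu/cs2 + cu**2/(2*eps*cs2**2)
                                     - uu/(2*eps*cs2))

def forcing(rho, u, F, tau_f, eps):
    """Discrete forcing term of Eq. (8)."""
    cF = np.einsum('qd,xyd->qxy', c, F)
    uF = np.einsum('xyd,xyd->xy', u, F)
    cu_cF = np.einsum('qd,xyd,qe,xye->qxy', c, u, c, F)  # (c_i.u)(c_i.F)
    return w[:, None, None] * rho * (1 - 0.5/tau_f) * (cF/cs2
                                     + (cu_cF - cs2*uF)/(eps*cs2**2))

def lbm_step(f, T, tau_f, eps, Da, Re, Gr, dt=1.0):
    """One collision-streaming step of Eq. (4) for the flow field only;
    T is a given temperature field (its own LBE, Eq. (5), is not shown)."""
    F_eps = 1.75 / np.sqrt(150 * eps**3)                # geometric function
    d0 = 0.5 * (1 + 0.5*dt*eps/(Da*Re))
    d1 = 0.5 * dt * eps * F_eps / np.sqrt(Da)

    rho = f.sum(axis=0)
    v = np.einsum('qd,qxy->xyd', c, f)                  # bare momentum
    v[..., 1] += 0.5*dt*eps*rho*Gr/Re**2 * T            # buoyancy along k (y)
    v /= rho[..., None]
    u = v / (d0 + np.sqrt(d0**2 + d1*np.linalg.norm(v, axis=-1)))[..., None]

    F = -eps/(Da*Re)*u - eps*F_eps/np.sqrt(Da)*np.linalg.norm(u, axis=-1)[..., None]*u
    F[..., 1] += eps*Gr/Re**2 * T                       # total body force F

    f = f - (f - feq(rho, u, eps))/tau_f + dt*forcing(rho, u, F, tau_f, eps)
    for q in range(9):                                  # periodic streaming
        f[q] = np.roll(f[q], shift=(int(c[q, 0]), int(c[q, 1])), axis=(0, 1))
    return f, u

# arbitrary usage: 64x64 periodic box, Re = Re_e = 400, Da = 0.01, eps = 0.5
nx = ny = 64
eps, Da, Re, Gr = 0.5, 1e-2, 400.0, 0.0
tau_f = 0.5 + 3.0/Re     # assumes 1/Re_e = cs2*(tau_f - 1/2)*dt with dt = 1
T = np.zeros((nx, ny))
f = feq(np.ones((nx, ny)), np.zeros((nx, ny, 2)), eps)
f, u_field = lbm_step(f, T, tau_f, eps, Da, Re, Gr)
```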

3 Numerical Results and Discussion

The configuration considered in the present study is shown in Fig. 1. The geometry is a square cavity with side length L = 1. The cavity is packed with a porous material that is homogeneous and isotropic.


Fig. 1. The configuration of the problem under consideration

The present models were validated by simulating mixed convection in a square cavity with a driving lid for Re = 100, 400, 1000 and the corresponding Ri = 1 × 10⁻², 6.25 × 10⁻⁴, 1 × 10⁻⁴. The results were compared with available data reported in previous studies in the absence of porous medium and heat generation for Pr = 0.71. It should be noted that the solution of mixed convection in the cavity without porous medium can be recovered if Da → ∞, ε → 1 and σ = 1. As shown in Tables 1 and 2, the present numerical results are in good agreement with those reported in previous studies.

Table 1. Comparisons of the average Nusselt number (Nu) at the top wall between the present work and that reported in previous studies

  Re     Present   Ref.[2]   Ref.[6]   FIDAP[6]   Ref.[5]
  100    1.97      1.94      2.01      1.99       2.01
  400    4.03      3.84      3.91      4.02       3.91
  1000   6.56      6.33      6.33      6.47       6.33

Table 2. Comparisons of the maximum and minimum values of the horizontal and vertical velocities at the center lines of the cavity between the present work and those reported in previous studies

           Ri = 1.0 × 10⁻²                            Ri = 6.25 × 10⁻⁴
         Present   Ref.[2]   Ref.[6]   Ref.[5]     Present   Ref.[2]   Ref.[6]   Ref.[5]
  u_min  -0.2079   -0.2037   -0.2122   -0.2122     -0.3201   -0.3197   -0.3099   -0.3099
  u_max   1.0000    1.0000    1.0000    1.0000      1.0000    1.0000    1.0000    1.0000
  v_min  -0.2451   -0.2448   -0.2506   -0.2506     -0.4422   -0.4459   -0.4363   -0.4363
  v_max   0.1729    0.1699    0.1765    0.1765      0.2948    0.2955    0.2866    0.2866


In the following parts, we will focus on investigating the effect of variations of the parameters, including Re, Ri and Da, on the flow and temperature fields. The porosity of the medium is set to 0.5, and the Prandtl number is set to 0.71. Numerical simulations are carried out on a 257×257 lattice.

3.1 Effect of the Reynolds Number (Re)

The range of Re tested is from 400 to 5000 under the condition Ri = 0.0001 and Da = 0.01. As shown in Fig. 2, the variation of Re has an important impact on the flow and temperature fields. It is found that the qualitative character of the flow is similar to the conventional lid-driven cavity flow of a non-stratified fluid: a primary vortex is formed in the center region of the cavity, and small vortices become visible near the bottom corners with increasing Re. For the temperature field, it is observed that there is a steep temperature gradient in the vertical direction near the bottom, and a weak temperature gradient in the center region. It is important to note that the convective region is enlarged as Re increases.

Fig. 2. The streamlines (top) and isothermals (bottom) for Re = 400, 1000, 3000, 5000

3.2 Effect of the Richardson Number (Ri)

The Richardson number is defined as the ratio Gr/Re², which provides a measure of the importance of buoyancy-driven natural convection relative to lid-driven forced convection. It reflects a conduction-dominated regime when Ri ≥ O(1), while the flow resembles the driven-cavity behavior of a non-stratified fluid when Ri ≤ O(1) [2,6]. In the present work, Ri is varied in the range 0.0001−10 for Re = 100 and Da = 0.01.

Fig. 3. The streamlines (top) and isothermals (bottom) for Ri = 0.0001, 0.01, 1, 10

As shown in Fig. 3, the flow and temperature fields change with the variation of Ri. For Ri ≤ 0.01, the buoyancy effect is overwhelmed by the mechanical effect of the sliding lid, and only a primary vortex close to the top boundary is observed. As Ri increases to 1.0, besides the primary vortex mentioned above, another vortex is formed near the bottom corner. With further increase of Ri, the vortex near the bottom corner moves toward the geometric center, and finally reaches the center of the cavity as Ri is increased to 10. It is also shown that, for Ri ≥ O(1), i.e., when the effect of buoyancy is more prominent than the mechanical effect of the sliding lid, the phenomena in the driven cavity are more complex. The isothermals in Fig. 3 show that heat transfer is mostly conductive in the middle and bottom parts of the cavity. A relatively uniform temperature is only found in a small region in the top portion of the cavity, where the mechanically induced convective activities are appreciable. However, it should be noted that this convective region shrinks with increasing Ri.

3.3 Effect of the Darcy Number (Da)

The Darcy number in the present work is varied in the range 0.0001−0.1 for Ri = 0.0001 and Re = 100. As shown in Fig. 4, the variation of Da significantly affects the flow and temperature fields. It is obvious that the increase of Da induces flow activity deeper into the cavity, which allows more energy to be carried from the sliding top wall toward the bottom; consequently, the convective region in the top portion of the cavity is enlarged. However, as Da is decreased to 0.0001, the primary vortex in the cavity is compelled to move toward the left wall and a new vortex near the bottom corner is formed.

Fig. 4. The streamlines (top) and isothermals (bottom) for Da = 0.0001, 0.001, 0.01, 0.1

In fact, when Da is small enough, the effect of the nonlinear term in Eq. (2) is prominent, which may induce some new phenomena, as observed in the present work. Finally, we would like to point out that the numerical results derived in this paper qualitatively agree well with those reported in Refs. [5,6].

4 Conclusion

In the present work, the problem of mixed convection in a driven cavity filled with porous medium is studied with the LBM. The influence of the Reynolds number, Richardson number and Darcy number on the flow and temperature fields is investigated in detail. As these parameters are varied in a wide range, some new phenomena are observed. Through comparisons with the existing literature, it is found that the LBM can be used as an alternative approach to study this problem. Compared with traditional numerical methods, the LBM offers flexibility, efficiency and outstanding amenability to parallelism when modelling complex flows, and thus it is well suited to computation on parallel computers; however, to obtain results of the same accuracy, a larger number of grid points may be needed. A recent comparison between the LBM and a finite difference method for simulating natural convection in porous media can be found in Ref. [14].

Acknowledgments. This work is supported by the National Basic Research Program of China (Grant No. 2006CB705804) and the National Science Foundation of China (Grant No. 50606012).


References
1. Shankar, P.N., Deshpande, M.D.: Fluid mechanics in the driven cavity. Annu. Rev. Fluid Mech. 32 (2000) 93–136
2. Iwatsu, R., Hyun, J.M., Kuwahara, K.: Mixed convection in a driven cavity with a stable vertical temperature gradient. Int. J. Heat Mass Transfer 36 (1993) 1601–1608
3. Oztop, H.F.: Combined convection heat transfer in a porous lid-driven enclosure due to heater with finite length. Int. Commun. Heat Mass Transf. 33 (2006) 772–779
4. Vafai, K.: Convective flow and heat transfer in variable-porosity media. J. Fluid Mech. 147 (1984) 233–259
5. Khanafer, K.M., Chamkha, A.J.: Mixed convection flow in a lid-driven enclosure filled with a fluid-saturated porous medium. Int. J. Heat Mass Transfer 42 (1999) 2465–2481
6. Al-Amiri, A.M.: Analysis of momentum and energy transfer in a lid-driven cavity filled with a porous medium. Int. J. Heat Mass Transfer 43 (2000) 3513–3527
7. Jue, T.C.: Analysis of flows driven by a torsionally-oscillatory lid in a fluid-saturated porous enclosure with thermal stable stratification. Int. J. Therm. Sci. 41 (2002) 795–804
8. Chen, S., Doolen, G.: Lattice Boltzmann method for fluid flow. Annu. Rev. Fluid Mech. 30 (1998) 329–364
9. Nithiarasu, P., Seetharamu, K.N., Sundararajan, T.: Natural convective heat transfer in a fluid saturated variable porosity medium. Int. J. Heat Mass Transfer 40 (1997) 3955–3967
10. Guo, Z., Zhao, T.S.: A lattice Boltzmann model for convection heat transfer in porous media. Numerical Heat Transfer, Part B 47 (2005) 157–177
11. Ergun, S.: Fluid flow through packed columns. Chem. Eng. Prog. 48 (1952) 89–94
12. Guo, Z., Zhao, T.S.: Lattice Boltzmann model for incompressible flows through porous media. Phys. Rev. E 66 (2002) 036304
13. Guo, Z., Zheng, C., Shi, B.: Non-equilibrium extrapolation method for velocity and pressure boundary conditions in the lattice Boltzmann method. Chin. Phys. 11 (2002) 366–374
14. Seta, T., Takegoshi, E., Okui, K.: Lattice Boltzmann simulation of natural convection in porous media. Math. Comput. Simulat. 72 (2006) 195–200

Numerical Study of Cross Diffusion Effects on Double Diffusive Convection with Lattice Boltzmann Method

Xiaomei Yu¹, Zhaoli Guo¹, and Baochang Shi²

¹ State Key Laboratory of Coal Combustion, Huazhong University of Science and Technology, Wuhan 430074, P.R. China
yuxiaomei [email protected], [email protected]
² Department of Mathematics, Huazhong University of Science and Technology, Wuhan 430074, P.R. China
[email protected]

Abstract. A lattice Boltzmann model is proposed to assess the impact of variable molecular transport effects on the heat and mass transfer in a horizontal shallow cavity due to natural convection. The formulation includes a generalized form of the Soret and Dufour mass and heat diffusion (cross diffusion) vectors derived from non-equilibrium thermodynamics and fluctuation theory. Both the individual cross diffusion effects and their combined effect on transport phenomena are considered. Results from numerical simulations indicate that the Soret mass flux and Dufour energy flux have an appreciable, and sometimes significant, effect. At the same time, the lattice Boltzmann model has proved adequate to describe higher-order effects on energy and mass transfer.

Keywords: lattice Boltzmann model; Soret effect; Dufour effect; Natural convection.

1 Introduction

Transport phenomena and thermo-physical properties in fluid convection subjected to the influence of horizontal thermal and concentration gradients arise in many fields of science and engineering. The conservation equations which describe the transport of energy and mass in these fluid systems are well developed [1-3]. The energy flux includes contributions due to a temperature gradient (Fourier heat conduction), a concentration gradient (Dufour diffusion) and a term which accounts for the energy transport as a result of each species having different enthalpies (species interdiffusion). The mass flux consists of terms due to a concentration gradient (Fickian diffusion), a temperature gradient (Soret diffusion), a pressure gradient (pressure diffusion) and a term which accounts for external forces affecting each species by a different magnitude. However, most studies concerned with transport phenomena only consider the contributions due to Fourier heat conduction and Fickian diffusion. The Soret mass flux and Dufour energy flux become significant when the thermal diffusion factor and the temperature and concentration gradients are large. Indeed, Rosner [4] has stressed that Soret diffusion is significant in several important engineering applications. Similarly, Atimtay and Gill [5] have shown Soret and Dufour diffusion to be appreciable for convection on a rotating disc: an error as high as 30% in the wall mass flux is introduced when the Soret effect is not accounted for. Of particular interest, crystal growth from the vapor is sometimes carried out under conditions conducive to Soret and Dufour effects. As greater demands are made for tighter control of industrial processes, such as in microelectromechanical systems (MEMS) [6], second-order effects such as Soret and Dufour diffusion may have to be included.

In regard to actual energy and mass transport in double diffusive convection fluid systems which include the Soret and Dufour cross diffusion effects, little, if any, complete and detailed work has been done. A few studies have considered the convection, within a vertical cavity, induced by Soret effects. The first study on this topic is due to Bergman and Srinivasan [7]; their numerical results indicate that the Soret-induced buoyancy effects are more important when convection is relatively weak. The particular case of a square cavity under the influence of thermal and solutal buoyancy forces, which are opposing and of equal intensity, has been investigated by Traore and Mojtabi [8], who studied the Soret effect on the flow structures numerically. The same problem was considered by Krishnan [9] and by Gobin and Bennacer [10] for the case of an infinite vertical fluid layer; the critical Rayleigh number for the onset of motion was determined by these authors. More recently, Ouriemi et al. [11] considered the case of a shallow layer of a binary fluid submitted to the influence of horizontal thermal and concentration gradients with Soret effects. Moreover, few studies have been concerned with the Soret and Dufour effects at the same time. Weaver and Viskanta [12] numerically simulated these effects in a cavity, and Malashetty et al. [13] performed a stability analysis of this problem. Because of the limited number of studies available, the knowledge concerning the influence of these effects on the heat and mass transfer and fluid flow is incomplete.

The lattice Boltzmann method (LBM) is a new method for simulating fluid flow and modeling physics in fluids. It has shown its power and advantages in a wide range of situations owing to its mesoscopic nature and distinctive computational features [14,15,16]. Contrary to conventional numerical methods, the LBM has many special advantages in simulating physical problems. The objective of the present study is to develop an effective lattice Boltzmann model to examine the influences and contributions of the Soret and Dufour effects on natural convection with simultaneous heat and mass transfer across a horizontal shallow cavity. Based on the idea of the double distribution function models for a fluid flow involving heat transfer, a generalized lattice Boltzmann model is proposed to solve the governing equations with Soret and Dufour effects.

2 The Lattice Boltzmann Method for the Formulation of the Problem

The configuration considered in this study is a horizontal shallow cavity of width H and length L. The top and bottom end walls are assumed to be adiabatic and impermeable to heat and mass transfer, while the boundary conditions along the right and left side walls are of Dirichlet type. Considering the Soret and Dufour effects, the dimensionless incompressible fluid equations for the conservation of mass, momentum, solutal concentration and temperature, with the inclusion of the Boussinesq approximation for the density variation, are written as

∇ · u = 0,                                                                            (1)
∂u/∂t + u · ∇u = −∇p + (Pr/Ra)^{1/2} ∇²u + Pr (T + ϕC),                               (2)
∂T/∂t + u · ∇T = (1/√(Ra·Pr)) (∇²T + D_CT ∇²C),                                       (3)
∂C/∂t + u · ∇C = (1/(Le·√(Ra·Pr))) (∇²C + S_TC ∇²T),                                  (4)

along with the corresponding boundary conditions, where ρ and u are the fluid density and velocity, respectively, and T is the temperature of the fluid. The dimensionless variables are defined as

X = X/H,   t = t√Ra/(H²/α),   u = u/((α/H)√Ra),   T = (T − T₀)/ΔT,   p = p/((α²/H²)Ra),
C = (C − C₀)/ΔC,   Da = K/H²,   Pr = ν/α,   Le = α/D,   Ra = gβ_T ΔT H³/(να),   ϕ = β_C ΔC/(β_T ΔT).

Here ν, α and D are the kinematic viscosity, thermal diffusivity and diffusion coefficient, respectively. The remaining notation is conventional. From the above equations it is observed that the present problem is governed by the thermal Rayleigh number Ra, the buoyancy ratio ϕ, the Lewis number Le, the Prandtl number Pr, the Dufour factor D_CT and the Soret factor S_TC. In this paper we focus on the effect of the Soret and Dufour factors, so the other dimensionless parameters are kept at fixed values.

Based on the idea of the TLBGK model in [17], we take the temperature and concentration as passive scalars, so that the governing equations for velocity, temperature and concentration can be treated individually. The evolution equation for the velocity field is similar to Ref. [18]:

f_i(x + c_i Δt, t + Δt) − f_i(x, t) = −(1/τ)(f_i(x, t) − f_i^(eq)(x, t)) + Δt F_i(x, t),   i = 0, . . . , b − 1,   (5)


where c_i is the discrete particle velocity, F_i represents the force term, f_i(x, t) is the distribution function (DF) for the particle with velocity c_i at position x and time t, Δt is the time increment, τ is the nondimensional relaxation time and f_i^(eq) is the equilibrium distribution function (EDF). The EDF must be defined appropriately such that mass and momentum are conserved and certain symmetry requirements are satisfied, in order to describe the correct hydrodynamics of the fluid. In the DnQb [19] models, the EDF is defined as

f_i^(eq) = α_i p + ω_i [ (c_i · u)/c_s² + (uu : (c_i c_i − c_s² I))/(2 c_s⁴) ],            (6)

F_i = ((δ_{i2} + δ_{i4})/(2c²)) c_i Pr (T + ϕC),                                           (7)

where ω_i is the weight and c_s is the sound speed; both depend on the underlying lattice. Take the D2Q9 model for example: the discrete velocities are given by c_0 = 0 and c_i = λ_i(cos θ_i, sin θ_i) c with λ_i = 1, θ_i = (i − 1)π/2 for i = 1−4, and λ_i = √2, θ_i = (i − 5)π/2 + π/4 for i = 5−8, and c_s = c/√3. The weights and coefficients are ω_0 = 4/9 and α_0 = −4σ/c², ω_i = 1/9 and α_i = λ/c² for i = 1, 2, 3, 4, and ω_i = 1/36 and α_i = γ/c² for i = 5, 6, 7, 8. Here σ = 5/12, λ = 1/3, γ = 1/12 is set [20], which gives the best computational results. The fluid velocity u and pressure p are defined by the DF f_i as

u = Σ_i c_i f_i = Σ_i c_i f_i^(eq),     p = (c²/(4σ)) [ Σ_{i≠0} f_i − (2/3)(|u|²/c²) ].      (8)

Through the Chapman-Enskog expansion, Eqs. (1)-(2) can be recovered, and the kinematic viscosity is given by (Pr/Ra)^{1/2} = ((2τ − 1)/6) c² Δt. Similarly, the LBGK equations for the temperature and concentration fields on a D2Q5 lattice are written as

T_i(x + c_i Δt, t + Δt) − T_i(x, t) = −(1/τ_T)[T_i(x, t) − T_i^(0)(x, t)],                  (9)
S_i(x + c_i Δt, t + Δt) − S_i(x, t) = −(1/τ_S)[S_i(x, t) − S_i^(0)(x, t)],                  (10)

where T_i and S_i are the DFs for the temperature field and concentration field, respectively, and τ_T and τ_S are the relaxation times with which the temperature and concentration DFs relax towards equilibrium. Following the idea used to construct a generalized LBGK model for the Burgers equation [21], the EDFs of temperature and concentration are defined as

T_i^(0) = (T/4)[1 − d_0 + 2 (c_i · u)/c²] + D_CT S (1 − d_0)/4,    i ≠ 0,
T_0^(0) = T d_0 − D_CT S (1 − d_0),                                                         (11)

S_i^(0) = (S/4)[(1 − l_0)φ + 2 (c_i · u)/c²] + S_TC T (1 − l_0)φ/4,    i ≠ 0,
S_0^(0) = S l_0 φ − S_TC T (1 − l_0)φ,                                                      (12)


The fluid temperature and concentration are obtained from the DFs as

T = Σ_i T_i = Σ_i T_i^(0),     S = Σ_i S_i = Σ_i S_i^(0).                                   (13)

It can be proved that the macroscopic temperature and concentration diffusion equations (3) and (4) can be recovered from the LBEs (9) and (10). The corresponding thermal conduction and mass diffusion coefficients are

1/√(Ra·Pr) = (c²/2)(τ_T − 1/2) Δt (1 − d_0),                                               (14)
1/(Le·√(Ra·Pr)) = (c²/2)(τ_S − 1/2) Δt (1 − l_0).                                          (15)

Here d_0 and l_0 are adjustable parameters. Evidently, by choosing adequate parameters the lattice Boltzmann model can simulate double diffusive convection that includes not only Fourier heat conduction and Fickian diffusion but also second-order cross diffusion.
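As a small illustration, the helper below inverts Eqs. (14)-(15) to obtain the relaxation times once Ra, Pr, Le, the lattice speed and the free parameters d_0 and l_0 are chosen; the example numbers in the call are arbitrary, not the values used in the simulations reported next.

```python
import math

def relaxation_times(Ra, Pr, Le, c=1.0, dt=1.0, d0=0.5, l0=0.5):
    """Invert Eqs. (14)-(15):
       1/sqrt(Ra*Pr)      = (c^2/2)*(tau_T - 1/2)*dt*(1 - d0)
       1/(Le*sqrt(Ra*Pr)) = (c^2/2)*(tau_S - 1/2)*dt*(1 - l0)"""
    chi = 1.0 / math.sqrt(Ra * Pr)        # dimensionless thermal diffusivity
    D   = chi / Le                        # dimensionless mass diffusivity
    tau_T = 0.5 + 2.0 * chi / (c**2 * dt * (1.0 - d0))
    tau_S = 0.5 + 2.0 * D   / (c**2 * dt * (1.0 - l0))
    return tau_T, tau_S

print(relaxation_times(Ra=1e3, Pr=0.71, Le=1.0, c=100.0, dt=1.0/100))
```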

3 Numerical Results and Discussion

Using the lattice Boltzmann model, we simulate the Soret and Dufour effects in double diffusive convection driven by temperature and concentration gradients. Since the Soret and Dufour effects are diffusive processes, the Rayleigh number should not be too high, in order to reduce the advective flux relative to the diffusive flux. Furthermore, the buoyancy ratio is set to ϕ = 1, which ensures that the buoyancy forces induced by the thermal and solutal effects are of equal intensity. Simulation tests are conducted with varied Soret and Dufour factors and fixed Le = 1 and Pr = 0.71. The lattice size is 128 × 256 with Ra equal to 10³. Streamlines, isothermals and iso-concentration profiles for different Dufour and Soret coefficients are shown in Fig. 1; the effect of the cross diffusion coefficients on the mass, momentum and concentration transport can be observed clearly. The center-line velocities and the local (averaged) Nusselt and Sherwood numbers on the left wall are shown in Fig. 2, Fig. 3 and Table 1, respectively, for detailed discussion. A positive (negative) value of the Dufour and Soret coefficients decreases (increases) the maximum velocity and extends (reduces) the range of the velocity field. Moreover, positive values of the Dufour and Soret coefficients yield solute concentrations below that of the free stream, and so is the temperature; the opposite is also true. When the Dufour coefficient is fixed, the local Nusselt numbers decrease with increasing Soret coefficient while the local Sherwood numbers increase; when the Soret coefficient is fixed, the local Nusselt numbers increase and the Sherwood numbers decrease instead. All curves show the expected behavior.

Fig. 1. Streamlines, isothermals and iso-concentrations (from left to right). From top to bottom, the Dufour and Soret coefficients are (0, 0), (0, 0.5), (−0.5, −0.5), (−0.9, −0.9), respectively.

Fig. 2. The velocity components, u and v, along the vertical and horizontal lines through the center at different cross diffusion numbers (D_CT, S_TC): a = (0, −0.5); b = (0, 0.0); c = (0, 0.5); d = (0, 0.9)

Table 1. Averaged Nusselt and Sherwood numbers on the left wall varied with the Soret and Dufour factors

  (D, S)   (0,-0.5)  (0,0)   (0,0.5)  (0,0.9)  (-0.5,0)  (0.5,0)  (0.9,0)  (-0.5,-0.5)  (0.5,0.5)  (0.9,0.9)
  Nu       0.9570    0.7698  0.5608   0.3785   0.7465    0.7965   0.8200   1.0838       0.6532     0.6053
  Sh       0.7465    0.7698  0.7965   0.8200   0.9570    0.5608   0.3785   1.0838       0.6532     0.6053

Fig. 3. Local Nusselt and Sherwood numbers on the left wall varied with the cross diffusion numbers (D_CT is equal to 0 in the two left figures)

4 Summary

In this paper, a lattice Boltzmann model is proposed and used to assess the Soret and Dufour effects on the heat and mass transfer in a horizontal shallow cavity due to natural convection. The proposed model is constructed in the double distribution function framework. Cross diffusion effects are studied under buoyancy-counteracting and buoyancy-augmenting flows in the shallow cavity with the lattice Boltzmann model. The numerical results show that the Soret and Dufour effects contribute to the mass and energy fluxes and can sometimes be significant. The simulation results agree well with previous work. Moreover, this indicates that the lattice Boltzmann model is adequate for simulating molecular transport even when second-order effects are included. It seems that the lattice Boltzmann method is effective in describing higher-order physical effects, which become important as industry develops.

Acknowledgments. This work is supported by the National Basic Research Program of China (Grant No. 2006CB705804) and the National Science Foundation of China (Grant No. 50606012).

References
1. Hirshfelder, J.O., Curtiss, C.F., Bird, R.B.: Molecular Theory of Gases and Liquids. Wiley, New York (1960)
2. Groot, S.R., Mazur, P.: Thermodynamics of Irreversible Processes. Dover, New York (1984)
3. Chapman, S., Cowling, T.G.: The Mathematical Theory of Non-Uniform Gases. 3rd edn. Cambridge University Press, Cambridge (1970)
4. Rosner, D.E.: Thermal (Soret) diffusion effects on interfacial mass transport rates. PhysicoChem. Hydrodyn. 1 (1980) 159–185
5. Atimtay, A.T., Gill, W.N.: The effect of free stream concentration on heat and binary mass transfer with thermodynamic coupling in convection on a rotating disc. Chem. Engng Commun. 34 (1985) 161–185
6. Karniadakis, G.E., Beskok, A.: Micro Flows, Fundamentals and Simulation. Springer, New York (2001)
7. Bergman, T.L., Srinivasan, R.: Numerical simulation of Soret-induced double diffusion in an initially uniform concentration binary fluid. Int. J. Heat Mass Transfer 32 (1989) 679–687
8. Traore, Ph., Mojtabi, A.: Analyse de l'effet Soret en convection thermosolutale. Entropie 184/185 (1989) 32–37
9. Krishnan, R.: A numerical study of the instability of double-diffusive convection in a square enclosure with horizontal temperature and concentration gradients. In: Heat Transfer in Convective Flows, HTD ASME National Heat Transfer Conference, vol. 107, Philadelphia (1989)
10. Gobin, D., Bennacer, R.: Double-diffusion convection in a vertical fluid layer: onset of the convection regime. Phys. Fluids 6 (1994) 59–67
11. Ouriemi, M., Vasseur, P., Bahloul, A., Robillard, L.: Natural convection in a horizontal layer of a binary mixture. Int. J. Thermal Sciences 45 (2006) 752–759
12. Weaver, J.A., Viskanta, R.: Natural convection due to horizontal temperature and concentration gradients - 2. Species interdiffusion, Soret and Dufour effects. Int. J. Heat Mass Transfer 34 (1991) 3121–3133
13. Malashetty, M.S., Gaikward, S.N.: Effects of cross diffusion on double diffusive convection in the presence of horizontal gradients. Int. J. Engineering Science 40 (2002) 773–787
14. Benzi, R., Succi, S., Vergassola, M.: The lattice Boltzmann equation: theory and applications. Phys. Report. 222 (1992) 145–197
15. Qian, Y.H., Succi, S., Orszag, S.: Recent advances in lattice Boltzmann computing. Annu. Rev. Comp. Phys. 3 (1995) 195–242
16. Chen, S.Y., Doolen, G.: Lattice Boltzmann method for fluid flows. Annu. Rev. Fluid Mech. 30 (1998) 329–364
17. Guo, Z.L., Shi, B.C., Zheng, C.G.: A coupled lattice BGK model for the Boussinesq equations. Int. J. Num. Meth. Fluids 39 (2002) 325–342
18. Deng, B., Shi, B.C., Wang, G.C.: A new lattice Bhatnagar-Gross-Krook model for the convection-diffusion equation with a source term. Chinese Phys. Lett. 22 (2005) 267–270
19. Qian, Y.H., D'Humières, D., Lallemand, P.: Lattice BGK models for Navier-Stokes equation. Europhys. Lett. 17 (1992) 479–484
20. Guo, Z.L., Shi, B.C., Wang, N.C.: Lattice BGK model for incompressible Navier-Stokes equation. J. Comput. Phys. 165 (2000) 288–306
21. Yu, X.M., Shi, B.C.: A lattice Bhatnagar-Gross-Krook model for a class of the generalized Burgers equations. Chin. Phys. Soc. 15 (2006) 1441–1449

Numerical Study of Cross Diffusion Effects on Double Diffusive Convection

817

8. Traore Ph., Mojtabi A.: Analyse de l’effect soret en convection thermosolutale, Entropie 184/185 (1989) 32–37 9. Krishnan R.: A Numerical Study of the Instability of Double-Diffusive Convection in a Square Enclosure with Horizontal Temperature and Concentration Gradients, Heat transfer in convective flows. HTD ASME National Heat Transfer conference, vol 107. Philadelphia (1989) 10. Gobin D., Bennacer R.: Double-diffusion Convection in a Vertical Fluid Layer: Onset of the Convection Regime. Phys. Fluids 6 (1994) 59–67 11. Ouriemi M., Vasseur P., Bahloul A., Robillard L.: Natural Convection in a Horizontal Layer of a Binary Mixture. Int. J. Thermal Sciences 45 (2006) 752–759 12. Weaver J. A., Viskanta R.: Natural Convection Due to Horizontal Temperature and Concentration Gradients -2. Species Interdiffusion, Soret and Dufour Effects. Int. J. Heat Mass Transfer 34 (1991) 3121–3133 13. Malashetty M. S., Gaikward S. N., Effects of Cross Diffusion on Double Diffusive Convection in the Presence of Horizontal Gradients. Int. J. Engineering Science 40 (2002) 773–787 14. Benzi R., Succi S., Vergassola M.: The Lattice Boltzmann Equation: Theory and Applications. Phys. Report. 222 (1992) 145–197 15. Qian Y. H., Succi S., Orszag S.: Recent Advances in Lattice Boltzmann computing. Annu. Rev. Comp. Phys. 3 (1995) 195–242 16. Chen S. Y., Doolen G.: Lattice Boltzmann Method for Fluid Flows. Annu. Rev. Fluid Mech. 30 (1998) 329–364 17. Guo Z. L., Shi B. C., Zheng C. G.: A Coupled Lattive BGK Model for the Bouessinesq Equation. Int. J. Num. Meth. Fluids 39 (2002) 325–342 18. Deng B., Shi B. C., Wang G. C: A New Lattice-Bhatnagar-Gross-Krook Model for the Convection-Diffusion Equation with a Source Term. Chinese Phys. Lett. 22 (2005) 267–270 19. Qian Y. H., D’Humi`eres D., Lallemand P.: Lattice BGK Models for Navier-Stokes Equation. Europhys. Lett. 17 (1992) 479–484 20. Guo Z. L., Shi B. C., Wang N. C., Lattice BGK model for incompressible NavierStokes Equation. J. Comput. Phys. 165 (2000) 288–306 21. Yu X. M., Shi B. C.: A Lattice Bhatnagar-Gross-Krook model for a class of the generalized Burgers equations. Chin. Phys. Soc. 15 (2006) 1441–1449

Lattice Boltzmann Simulation of Some Nonlinear Complex Equations

Baochang Shi

Department of Mathematics, Huazhong University of Science and Technology, Wuhan 430074, PR China
[email protected]

Abstract. In this paper, the lattice Boltzmann method for the convection-diffusion equation with source term is applied directly to solve some important nonlinear complex equations, including the nonlinear Schrödinger (NLS) equation, coupled NLS equations, Klein-Gordon equation and coupled Klein-Gordon-Schrödinger equations, by using complex-valued distribution functions and relaxation time. Detailed simulations of these equations are carried out. Numerical results agree well with the analytical solutions, which show that the lattice Boltzmann method is an effective numerical solver for complex nonlinear systems.

Keywords: Lattice Boltzmann method, nonlinear Schrödinger equation, Klein-Gordon equation, Klein-Gordon-Schrödinger equations.

1 Introduction

The lattice Boltzmann method (LBM) is an innovative computational fluid dynamics (CFD) approach for simulating fluid flows and modeling complex physics in fluids [1]. Compared with the conventional CFD approach, the LBM is easy to program, intrinsically parallel, and it is also easy to incorporate complicated boundary conditions such as those in porous media. The LBM also shows potential for simulating nonlinear systems, including the reaction-diffusion equation [2,3,4], convection-diffusion equation [5,6], Burgers equation [7] and wave equation [3,8], etc. Recently, a generic LB model for the advection and anisotropic dispersion equation was proposed [9]. However, almost all of the existing LB models are used for real nonlinear systems. Beginning in the mid 1990s, based on quantum-computing ideas, several types of quantum lattice gases have been studied to model some real/complex mathematical-physical equations, such as the Dirac equation, Schrödinger equation, Burgers equation and KdV equation [10,11,12,13,14,15,16,17]. Although these works are not in the classical LBM framework, they raise an interesting question: how does the classical LBM perform when used to model complex equations? Very recently, Zhong, Feng, Dong, et al. [18] applied the LBM to solve the one-dimensional nonlinear Schrödinger (NLS) equation using the idea of the quantum lattice-gas model [13,14] for treating the reaction term. Detailed simulation results in Ref. [18] have shown that the LB schemes proposed there have accuracy that is better than or at least comparable to the Crank-Nicolson finite difference scheme. Therefore, it is necessary to study the LBM for nonlinear complex equations further.


In this paper, using the idea of adopting complex-valued distribution functions and relaxation time [18], the LBM for the n-dimensional (nD) convection-diffusion equation (CDE) with source term is applied directly to solve some important nonlinear complex equations, including the nonlinear Schrödinger (NLS) equation, coupled NLS equations, Klein-Gordon (KG) equation and coupled Klein-Gordon-Schrödinger (CKGS) equations. Detailed simulations of these equations are carried out as accuracy tests. Numerical results agree well with the analytical solutions, which shows that the LBM is also an effective numerical solver for complex nonlinear systems.

2 Lattice Boltzmann Model

The nD CDE with source term considered in this paper can be written as

∂_t φ + ∇ · (φu) = α ∇²φ + F(x, t),                                                          (1)

where ∇ is the gradient operator with respect to the spatial coordinate x in n dimensions, φ is a scalar function of time t and position x, u is a constant velocity vector, and F(x, t) is the source term. When u = 0, Eq. (1) becomes the diffusion equation (DE) with source term, and several such equations form a reaction-diffusion system (RDS).

2.1 LB Model for CDE

The LB model for Eq. (1) is based on the DnQb lattice [1] with b velocity directions in nD space. The evolution equation of the distribution function in the model reads

f_j(x + c_j Δt, t + Δt) − f_j(x, t) = −(1/τ)(f_j(x, t) − f_j^eq(x, t)) + Δt F_j(x, t),   j = 0, . . . , b − 1,   (2)

where {c_j, j = 0, . . . , b − 1} is the set of discrete velocity directions, Δx and Δt are the lattice spacing and the time step, respectively, c = Δx/Δt is the particle speed, τ is the dimensionless relaxation time, and f_j^eq(x, t) is the equilibrium distribution function, which is determined by

f_j^eq(x, t) = ω_j φ (1 + (c_j · u)/c_s² + ((uu) : (c_j c_j − c_s² I))/(2 c_s⁴))              (3)

such that

Σ_j f_j = Σ_j f_j^eq = φ,   Σ_j c_j f_j^eq = φ u,   Σ_j c_j c_j f_j^eq = c_s² φ I + φ uu,      (4)

with α = c_s²(τ − 1/2)Δt, where I is the unit tensor, ω_j are weights and c_s, the so-called sound speed in the LBM for fluids, is related to c and ω_j; they depend on the lattice model used. For the D1Q3 model, ω_0 = 2/3, ω_{1∼2} = 1/6; for the D2Q9 one, ω_0 = 4/9, ω_{1∼4} = 1/9, ω_{5∼8} = 1/36; then c_s² = c²/3 for both of them.


F_j in Eq. (2), corresponding to the source term in Eq. (1), is taken as

F_j = ω_j F (1 + λ (c_j · u)/c_s²)                                                            (5)

such that Σ_j F_j = F and Σ_j c_j F_j = λ F u, where λ is a parameter which is set to (τ − 1/2)/τ in this paper. It is found that the LB model using this setting has better numerical accuracy and stability when u ≠ 0. The macroscopic equation (1) can be derived through the Chapman-Enskog expansion (see the Appendix for details). It should be noted that the equilibrium distribution function for the above LBM has often been used in a form linear in u, which is different from that in Eq. (3). However, from the Appendix we can see that some additional terms in the recovered macroscopic equation can be eliminated if Eq. (3) is used. Moreover, if u is not constant, an appropriate assumption on it is needed in order to remove the additional term(s) when recovering Eq. (1).

2.2 Version of LB Model for Complex CDE

Almost all of the existing LB models simulate real evolutionary equations. However, from the Chapman-Enskog analysis we can see that the functions in the CDE and the related distribution functions can be either real or complex without affecting the results. In general, for complex evolutionary equations, let us decompose the related complex functions and the relaxation time into their real and imaginary parts by writing

f_j = g_j + i h_j,   f_j^eq = g_j^eq + i h_j^eq,   F_j = G_j + i H_j,   w = 1/τ = w_1 + i w_2,   (6)

where i² = −1. Now we can rewrite Eq. (2) as

g_j(x + c_j Δt, t + Δt) − g_j(x, t) = −w_1(g_j(x, t) − g_j^eq(x, t)) + w_2(h_j(x, t) − h_j^eq(x, t)) + Δt G_j(x, t),
h_j(x + c_j Δt, t + Δt) − h_j(x, t) = −w_2(g_j(x, t) − g_j^eq(x, t)) − w_1(h_j(x, t) − h_j^eq(x, t)) + Δt H_j(x, t),   j = 0, . . . , b − 1.   (7)

Eq. (7) is the implementation form of the LB model proposed for the complex CDE. It should be noted that Eq. (7) reflects the coupling between the real and imaginary parts of the unknown function in the complex CDE through the complex-valued relaxation time in a natural way.
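The following compact NumPy sketch applies this complex-valued formulation to a 1D NLS equation (zero advection velocity, imaginary diffusion coefficient α = i, nonlinear source): keeping f_j and τ complex reproduces the split scheme (7) automatically. The grid size, particle speed c and the plane-wave test are illustrative choices, not the exact runs reported in the examples below.

```python
import numpy as np

def nls_lbm_1d(nx=256, L=2*np.pi, c=50.0, beta=2.0, t_end=0.1):
    """D1Q3 sketch of the complex-valued LBM for the 1D NLS equation
       i u_t + u_xx + beta*|u|^2 u = 0,
    rewritten as a CDE with zero velocity, alpha = i and source
    F = i*beta*|u|^2*u, then solved with complex f_j and complex tau."""
    dx = L / nx
    dt = dx / c
    cs2 = c**2 / 3.0                       # D1Q3 sound speed squared
    w = np.array([2/3, 1/6, 1/6])          # D1Q3 weights
    tau = 0.5 + 1j / (cs2 * dt)            # alpha = cs2*(tau - 1/2)*dt = i

    x = np.arange(nx) * dx
    u = np.exp(1j * x)                     # plane wave A = 1, k = 1
    f = w[:, None] * u                     # equilibrium initialisation (u_flow = 0)

    nsteps = int(round(t_end / dt))
    for _ in range(nsteps):
        phi = f.sum(axis=0)
        F = 1j * beta * np.abs(phi)**2 * phi            # nonlinear source term
        f = f - (f - w[:, None] * phi) / tau + dt * w[:, None] * F
        f[1] = np.roll(f[1],  1)                        # streaming, periodic
        f[2] = np.roll(f[2], -1)
    return x, f.sum(axis=0), nsteps * dt

x, u_num, t = nls_lbm_1d()
u_exact = np.exp(1j * (x - (1.0 - 2.0) * t))   # omega = k^2 - beta*|A|^2 = -1
print(np.sum(np.abs(u_num - u_exact)) / np.sum(np.abs(u_exact)))  # global L1 error
```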

3 Simulation Results

To test the LB model proposed above, numerical simulations of some CDEs with source term are carried out. Here we select three types of complex nonlinear evolutionary equations with analytical solutions to test mainly the numerical order of accuracy of the LBM. In the simulations, the initial value of the distribution function is taken as its equilibrium part at time t = 0, which is a commonly used strategy, and Δx²/Δt (= Δx × c) is set to be constant for different grids, in view of claims that LB schemes are second-order accurate in space and first-order accurate in time.

Example 3.1. We show an accuracy test for the 2D NLS equation

i u_t + u_xx + u_yy + β|u|² u = 0,                                                            (8)

which admits a progressive plane wave solution [19]

u(x, y, t) = A exp(i(c_1 x + c_2 y − ωt)),                                                    (9)

where ω = c_1² + c_2² − β|A|², and A, c_1 and c_2 are constants. In the simulations, a D2Q9 LB model is used. We set A = c_1 = c_2 = 1, β = 2, and the periodic boundary condition is used in [0, 2π] × [0, 2π] as in Ref. [19], while the initial condition for u is determined by the analytical solution (9) at t = 0. The global relative L1 error and the numerical order of accuracy at time t = 1 are given in Table 1. From the table we can see that the LBM for the NLS equation (8) has about second-order accuracy, but the order of accuracy decreases as the resolution increases.

Table 1. Accuracy test for the NLS equation (8)

  Grid N × N   c     Real part of u       Imaginary part of u
                     L1 err     Order     L1 err     Order
  62 × 62      50    2.56e-2    –         2.57e-2    –
  124 × 124    100   5.71e-3    2.1646    5.71e-3    2.1702
  248 × 248    200   1.34e-3    2.0913    1.34e-3    2.0913
  496 × 496    400   4.15e-4    1.6910    4.15e-4    1.6910

Example 3.2. We show an accuracy test for the 1D coupled NLS equations

i u_t + i α u_x + (1/2) u_xx + (|u|² + β|v|²) u = 0,
i v_t − i α v_x + (1/2) v_xx + (β|u|² + |v|²) v = 0,                                          (10)

with the soliton solutions [19]

u(x, t) = A sech(√(2a) (x − γt)) exp(i((γ − α)x − ωt)),
v(x, t) = A sech(√(2a) (x − γt)) exp(i((γ + α)x − ωt)),                                       (11)

where A = √(2a/(1 + β)), ω = (γ² − α²)/2 − a, and a, γ, α and β are real constants.

We can use two LB evolution equations based on the D1Q3 lattice to simulate Eq. (10). We set a = 1, γ = 1, α = 0.5, β = 2/3 and use the periodic boundary condition in [−25, 25] as in Ref. [19], while the initial condition is determined by the analytical solution (11) at t = 0. The global relative L1 errors at time t = 1 are plotted in Fig. 1 (left) for Δx = 1/10 to 1/320 and c = 20 to 640. It is found that the LBM for the coupled NLS equations (10) has second-order accuracy, and the order of accuracy stays close to 2.0 for different grid resolutions. To test the LBM further, the error evolution with time is also plotted in Fig. 1 (right) for Δx = 1/100 and Δt = 10⁻⁴ and 10⁻⁵, respectively. The errors for Δt = 10⁻⁴ are about 7.0 to 9.0 times those for Δt = 10⁻⁵.

Fig. 1. Left: Global relative errors vs. space step at t = 1.0. Right: Error evolution with time.

Example 3.3. Consider the 1D coupled KGS equations

i ψ_t + (1/2) ψ_xx + ψφ = 0,
φ_tt − φ_xx + φ − |ψ|² = 0,                                                                   (12)

with the soliton solutions [20]

ψ(x, t) = (3√2/(4√(1 − v²))) sech²((x − vt − x_0)/(2√(1 − v²))) exp(i(vx + ((1 − v² + v⁴)/(2(1 − v²))) t)),
φ(x, t) = (3/(4(1 − v²))) sech²((x − vt − x_0)/(2√(1 − v²))),                                  (13)

where v is the propagation velocity of the wave and x_0 is the initial phase. Note that the KG equation in Eq. (12) is different from the RD equation, the special case of Eq. (1), because of the second time derivative of φ. In order to solve this equation by the LBM, we must modify the LB model in Section 2. Consider the nD KG equation

φ_tt − α ∇²φ + V(φ) = 0,                                                                      (14)

in the spatial region Ω, where V(φ) is some nonlinear function of φ. The initial conditions associated with Eq. (14) are given by

φ(x, 0) = f(x),   φ_t(x, 0) = g(x),                                                           (15)

where the boundary conditions can be specified according to the given problem. Using the idea in Ref. [8], we modify Eq. (3) as follows:

f_j^eq(x, t) = ω_j (φ_t + c_s² (φ − φ_t) I : (c_j c_j − c_s² I)/(2 c_s⁴))
             = ω_j (φ_t + (φ − φ_t)(c_j² − n c_s²)/(2 c_s²)),                                  (16)


such that

Σ_j f_j = Σ_j f_j^eq = φ_t,   Σ_j c_j f_j^eq = 0,   Σ_j c_j c_j f_j^eq = c_s² φ I,             (17)

with α = c_s²(τ − 1/2)Δt. We use a difference scheme for computing φ_t to obtain φ; for instance, φ(x, t + Δt) = Δt Σ_j f_j(x, t + Δt) + φ(x, t). Now we can use two LB evolution equations based on the D1Q3 lattice to simulate Eq. (12), one for the NLS equation and the other for the KG equation. In the simulation, the initial and boundary conditions are determined by the analytical solution (13) in [−10, 10]. The non-equilibrium extrapolation scheme [21] is used for treating the boundary conditions. The global relative L1 errors are also plotted in Fig. 1 (left) for v = 0.8, x_0 = 0 at time t = 1 for Δx = 1/10 to 1/640 and c = 10 to 640. It is found that the LBM for the coupled KGS equations (12) also has second-order accuracy, which is comparable to the multisymplectic scheme in Ref. [20]. We also find that the order of accuracy for φ increases from 1.7308 to 2.0033 as the grid resolution increases. This may be because we use a first-order difference scheme to compute φ_t.

Conclusion

In this paper the LBM for nD CDE with source term has been applied directly to solve some important nonlinear complex equations by using complex-valued distribution function and relaxation time, and the LB model for nD KG equation is derived by modifying the method for CDE. Simulations of 2D NLS equation, 1D coupled NLS equations and 1D coupled KGS equations are carried out for accuracy test. Numerical results agree well with the analytical solutions and the second-order accuracy of the LBM is confirmed. We found that to attain better accuracy the LBM for the test problems requires a relatively small time step Δt and Δt = 10−4 is a proper choice. Since the Chapman-Enskog analysis shows that this kind of complex-valued LBM is only the direct translation of the classical LBM in complex-valued function, the LBM can be applied directly to other complex evolutionary equations or real ones with complex-valued solutions. Although the preliminary work in this paper shows that the classical LBM has also potentials to simulate complex-valued nonlinear systems, some problems still need to be solved, such as how to improve the accuracy and efficiency of complex LBM and that how does the complex LBM for CDE work when u is not constant.

References 1. Qian, Y. H., Succi, S., Orszag, S.: Recent advances in lattice Boltzmann computing. Annu. Rev. Comput. Phys. 3 (1995) 195–242 2. Dawson, S. P., Chen S. Y., Doolen,G. D.: Lattice Boltzmann computations for reaction-diffusion equations. J. Chem. Phys. 98 (1993) 1514–1523 3. Chopard, B., Droz, M.: Cellular automata modeling of physical systems. Cambridge University Press, Cambridge (1998)

824

B. Shi

4. Blaak, R., Sloot, P. M.: Lattice dependence of reaction-diffusion in lattice Boltzmann modeling. Comput. Phys. Comm. 129 (2000) 256–266 5. Van der Sman, R. G. M., Ernst, M. H.: Advection-diffusion lattice Boltzmann scheme for irregular lattices. J. Comput. Phys. 160(2) (2000) 766–782 6. Deng, B., Shi, B. C., Wang, G. C.: A new lattice Bhatnagar-Gross-Krook model for convection-diffusion equation with a source term. Chin. Phys. Lett. 22 (2005) 267–270 7. Yu, X. M., Shi, B. C.: A lattice Bhatnagar-Gross-Krook model for a class of the generalized Burgers equations. Chin. Phys. 25(7) (2006) 1441–1449 8. Yan, G. W.: A lattice Boltzmann equation for waves. J. Comput. Phys. (2000) 161(9) (2000) 61–69 9. Ginzburg, I.: Equilibrium-type and link-type lattice Boltzmann models for generic advection and anisotropic-dispersion equation. Advances in Water Resources. 28(11) (2005) 1171–1195 10. Meyer, D. A.: From quantum cellular automata to quantum lattice gas. J. Stat. Phys. 85 (1996) 551–574 11. Succi, S., Benzi, R.: The lattice Boltzmann equation for quantum mechanics. Physica D 69 (1993) 327–332 12. Succi, S.: Numerical solution of the Schr¨ odinger equation using discrete kinetic theory. Phys. Rev. E 53 (1996) 1969–1975 13. Boghosian, B. M., Taylor IV, W.: Quantum lattice gas models for the many-body Schr¨ odinger equation. Int. J. Mod. Phys. C 8 (1997) 705–716 14. Yepez, J., Boghosian, B.: An efficient and accurate quantum lattice-gas model for the many-body Schr¨ odinger wave equation. Comput. Phys. Commun. 146 (2002) 280–294 15. Yepez, J.: Quantum lattice-gas model for the Burgers equation. J. Stat. Phys. 107 (2002) 203–224 16. Vahala, G., Yepez, J., Vahala, L.: Quantum lattice gas representation of some classical solitons. Phys. Lett. A 310 (2003) 187–196 17. Vahala, G., Vahala, L., Yepez, J: Quantum lattice representations for vector solitons in external potentials. Physica A 362 (2006) 215–221 18. Zhong, L. H., Feng, S. D., Dong, P., et al.: Lattice Boltzmann schemes for the nonlinear Schr¨ odinger equation. Phys. Rev. E. 74 (2006) 036704-1–9 19. Xu, Y., Shu, C.-W.: Local discontinuous Galerkin methods for nonlinear Schr¨ odinger equations. J. Comput. Phys. 205 (2005) 72–97 20. Kong, L. H., Liu, R. X., Xu, Z. L.: Numerical solution of interaction between Schr¨ odinger field and Klein-Gordon field by multisymplectic method. Appl. Math. Comput. 181 (2006) 242–350 21. Guo, Z. L., Zheng, C. G., Shi, B. C.: Non-equilibrium extrapolation method for velocity and pressure boundary conditions in the lattice Boltzmann method. Chin. Phys. 11 (2002) 366–374

Appendix: Derivation of Macroscopic Equation To derive the macroscopic equation (1), the Chapman-Enskog expansion in time and space is applied: (1)

fj = fjeq + fj

(2)

+ 2 fj , F = F (1) , ∂t = ∂t1 + 2 ∂t2 , ∇ = ∇1 ,

where  is the Knudsen number, a small number.

(18)

Lattice Boltzmann Simulation of Some Nonlinear Complex Equations

825

From Eqs(18), (4) and (5), it follows that 

(k)

j

fj

= 0(k ≥ 1),



(1)

j

Fj

= F (1) ,



(1)

j

cj Fj

= λF (1) u,

(19)

c ·u

(1)

where Fj = ωj F (1) (1 + λ cj 2 ) and λ is a parameter specified later. s Applying the Taylor expansion and Eq.(18) to Eq.(2), we have 1 (1) (1) f + Fj , τ Δt j 1 (2) f . =− τ Δt j

O() : D1j fjeq = − (1)

O(2 ) : ∂t2 fjeq + D1j fj

+

Δt 2 eq D f 2 1j j

(20) (21)

where D1j = ∂t1 + cj · ∇1 . Applying Eq.(20) to the left side of Eq.(21), we can rewrite Eq.(21) as ∂t2 fjeq + (1 −

1 Δt 1 (2) (1) (1) )D1j fj + D1j Fj = − f . 2τ 2 τ Δt j

(22)

Summing Eq.(20) and Eq.(22) over j and using Eq.(4) and Eq.(19), we have ∂t1 φ + ∇1 · (φu) = F (1) , (23)

∂t2 φ + (1 −

 1 Δt (1) )∇1 · (∂t1 F (1) + ∇1 · (λF (1) u)) = 0. cj fj + 2τ 2 j

(24)

Since u is a constant vector, using Eqs (20), (4), (19) and (23), we have 

(1)

j

cj fj

 (1) = −τ Δt j cj (D1j fjeq − Fj ) = −τ Δt(∂t1 (φu) + ∇1 · (φuu + c2s φI) − λF (1) u) = −τ Δt(u(∂t1 φ + ∇1 · (φu)) + c2s ∇1 φ − λF (1) u) = −τ Δt(c2s ∇1 φ + (1 − λ)F (1) u).

(25)

Then substituting Eq.(25) into Eq.(24), we obtain ∂t2 φ = α∇21 φ + Δt(τ −

1 Δt − λτ )∇1 · (F (1) u) − ∂t F (1) . 2 2 1

(26)

where α = c2s (τ − 12 )Δt. Therefore, combining Eq.(26) with Eq.(23) and taking λ=

τ − 12 τ

, we have ∂t φ + ∇ · (φu) = α∇2 φ + F −

Δt ∂t1 F. 2

(27)

Neglecting the last term in the right side of Eq.(27), the CDE (1) is recovered. If we use the LB scheme proposed in Ref.[6], this term can be fully eliminated.

A General Long-Time Molecular Dynamics Scheme in Atomistic Systems: Hyperdynamics in Entropy Dominated Systems Xin Zhou and Yi Jiang Los Alamos National Laboratory, Los Alamos, NM 87545, USA [email protected]

Abstract. We extend the hyperdynamics method developed for lowdimensional energy-dominated systems, to simulate slow dynamics in more general atomistic systems. We show that a few functionals of the pair distribution function forms a low-dimensional collective space, which is a good approximation to distinguish stable and transitional conformations. A bias potential that raises the energy in stable regions, where the system is at local equilibrium, is constructed in the pair-correlation space on the fly. Thus a new MD scheme is present to study any time-scale dynamics with atomic details. We examine the slow gas-liquid transition of Lennard-Jones systems and show that this method can generate correct long-time dynamics and focus on the transition conformations without prior knowledge of the systems. We also discuss the application and possible improvement of the method for more complex systems.

1

Introduction

The atomistic molecular dynamics (MD) simulations are typically limited to a time scale of less than a microsecond, so many interesting slow processes in chemistry, physics, biology and materials science cannot be simulated directly. Many new methods, such as kinetic monte carlo [1], transition path ensemble methods [2], minimal action/time methods [3]have been developed to study the slow processes (for a review see [4]). They all require prior knowledge of the system, which is often hard to obtain, and they can only deal with a few special processes inside a small part of configurational space of the system. In many systems, the interesting slow dynamics are governed by the infrequent, fast transitions between (meta-) stable regions; yet the systems spend most of their time in the stable regions, whose dynamics can be well described by some time-average properties. Hence an alternative approach to describing the long-time dynamic trajectory would be some suitable time propagator entering in/out the stable regions as well as the real short transition trajectory among the regions. In other words, we would coarse-grain the stable regions while keeping the needed details outside the stable regions. This natural coarse-graining technique is different from the usual method that average some degrees of freedom. The averaging process in the latter method usually changes the dynamics, although the static properties of systems might remain correct. Y. Shi et al. (Eds.): ICCS 2007, Part I, LNCS 4487, pp. 826–833, 2007. c Springer-Verlag Berlin Heidelberg 2007 

A General Long-Time Molecular Dynamics Scheme in Atomistic Systems

827

Hyperdynamics, originally developed by Voter [5],is an example of such a coarse-graining method. The hyperdynamics method treats the potential basins as the stable configurational regions that are separated by saddle surfaces. A bias potential is designed to lift the potential energy of the system in the basins, while keeping the saddle surfaces intact. Dynamics on the biased system lead to accelerated evolution from one stable region to another. Based on transition state theory (TST), the realistic waiting time treal before a transition from the basins can be re-produced statistically:  treal = Δt exp(βΔV (ri )), (1) i

where Δt is MD time step, and ΔV (ri ) is the bias potential at the conformation ri , β = 1/kB T , kB is the Boltzmann constant and T is the temperature of the system; r refers to the 3N -dimensional conformation vector throughout this paper. The method has been applied successfully to systems in which the relevant states correspond to deep wells in the potential energy surface (PES), with dividing surfaces at the energy ridge tops separating these states. This is typical of solid-state diffusion systems [6].However, in systems where entropy is not negligible, the basins of PES do not completely correspond to the longtime stable regions. Hyperdynamics cannot readily apply. An extreme example is hard sphere systems: all physical permitted conformations have the same zero potential energy, but some conformations, which correspond to transition regions among stable regions, are rarely visited. A complication occurs even when applying hyperdynamics in solids with fairly clearly defined stable regions: after applying the bias potential, the energy landscape becomes much flatter and the system can start to have entropic-like characteristics. These effects limit the improvement in the simulation rate that can be achieved by the hyperdynamics method over the direct MD approach [5]. Thus, although there have been some attempts [7,8] to apply hyperdynamics to enhance conformational sampling in biological systems, generally, accurate slow dynamics or kinetics can only be expected for relatively simple or low-dimensional systems. Here, we derive a more general hyperdynamics method that can be used to access very long (possibly all) time scale in fluids where both entropy and potential energy are important. A description of part of this method has appeared in [9]. We first present explicit conditions for applying this method without using TST. Then we give some possible collective variables for identifying the stable and transition conformations to design suitable bias potential. We then examine the performance of this method in simple fluids. Finally we discuss the further development and application of this method in more general (and complex) systems.

2

Theory

We begin this process by introducing a time-compressing transformation, dτ = a(r)dt,

(2)

828

X. Zhou and Y. Jiang

where dτ is a pseudo-time step, dt is the real time step, and the local dimensionless compression factor is given by a conformational function a(r), which is ≤ 1. Thus the trajectory r(t) can be rewritten as r(τ ) = r(τ (t)) in a shorter pseudo-time interval τ , and we have, 

t







dt a(r(t )) = t

τ=

drD(r; r(t); t)a(r),

(3)

0

where D(r; r(t); t) is the density probability of r(t) in the conformational space in interval [0, t],  1 t  D(r; r(t); t) = dt δ(r − r(t )) (4) t 0 The compressed trajectory r(τ ) will satisfy a new equation of motion [9]. If we use the usual Langevin equation to simulate the evolution of system in NVT ensemble, the new equation is, d Pi Ri = dτ Mi  d ∂ΔV (r) ∂U (r) Pi = − − ζ  (r)Pi + fi (τ ) − Lij , dτ ∂Ri ∂Rj j

(5)

where Ri is the ith component of conformational vector r, Mi = mi a2 (r) is the pseudo-mass of the particle with the real mass mi . It is an equation of motion of particles with smaller conformation-dependent mass M (r) under new potential U (r) = V (r) + ΔV (r) where ΔV (r) = −kB T ln a(r), as well as new friction coefficient, if we neglect the zero ensemble-average term Lij =

1 Pi Pj − δij , kB T Mj

where δij is the kroneck δ-symbol. Similarly, the zero-mean white noise friction force f (τ ) satisfies the fluctuation-dissipation theorem of the new system, < fi (τ )fj (τ  ) >= 2kB T Mi ζ  δij δ(τ − τ  ).

(6)

where ζ  (r) = ζ/a(r) and ζ is the friction coefficient of the original system. At the first glance, it would not appear to be advantageous over directly generate r(τ ) from (5), as very short time steps are necessary due to the small mass M . However, if we only focus on the long-time dynamics, we can use a smoother pseudo-trajectory R(τ ) to replace r(τ ), provided that the reproduced time from R(τ ) is the same as that from r(τ ). Thus, a sufficient condition to replace r(τ ) with R(τ ) is that their density probability are the same. Actually, if the compressed factor is defined as function of some collective variables, notated as X, rather than that of r, a simpler condition is, D(X; R(τ ); τ ) = D(X; r(τ ); τ ),

(7)

A General Long-Time Molecular Dynamics Scheme in Atomistic Systems

829

where D(X; R(τ ); τ ) and D(X; r(τ ); τ ) are the density probability of the trajectory R(τ ) and r(τ ) in the X space, respectively. In the time-consuming regions (stable regions), similar conformations would be visited repeatedly many times even during a finite simulation time. Thus we can assume the distribution can be approximated by D(r; r(t); t) ∝ exp(−βV (r)). Many methods might be used to generate R(τ ) with the required distribution. One of them is to use a realistic trajectory corresponding to the local equilibrium of a new potential U (r) = V (r) − kB T ln a(r). Actually, if we select a(r) < 1 only in the potential wells, we effectively have the hyperdynamics presented by Voter [5]. In many cases, some collective variables X can be used to identify transition conformations. The transition conformations related to slow dynamics always locate in some regions of X space, so we can select a(r) = a(X(r)) < 1 outside these regions that involve transition conformations, thus the new potential is U (r) = V (r) − kB T ln a(X(r)). For example, in polymers, the transition regions of slow conformational transitions can be identified in the soft torsion angles space. We require that the density probability of pseudo-trajectory in the collective-variable space equals to that of the compressed trajectory. Thus, if we use a bias potential to generate the pseudo-trajectory, a simple design of the bias potential is ΔV (r) = kB T f+ (ln(D(X(r))/Dc )),

(8)

where f+ (y) = yΘ(y), Θ(y) is the step function. D(X) is the density probability of a segment of trajectory, and Dc is a pre-selected threshold value. The design means we compress the trajectory to make the density probability reach Dc if the original density is larger than the value. We can repeat the biasing process: simulating a segment of trajectory to get a bias potential, then simulating another segment of trajectory under the designed bias potentials to get another bias potential. The sum of all bias potentials forms the total bias potential to reach (almost) any time scale. In practice, we also add a correction the definition of f+ (y) near y = 0 to get continuous bias force. The key to apply hyperdynamics successfully is to design suitable bias potentials ΔV (r). Obviously, ΔV (r) should have the same symmetry as V (r). For a simple case of N identical particles, the conformational vector {Rj } (j = 1, ···, N )  can be rewritten as a density field, ρˆ(x) = j δ(x − Rj ). Here both x and R are the normal 3-dimension spatial vectors. Since the neighboring conformations are equivalent in the viewpoint of slow dynamics, ρˆ(x) can be averaged to get a smooth function, ρ¯(x), for example, by using a Gaussian function to replace the Dirac-δ function. If the width of the Gaussian function is small, ρ¯(x) can be used to identify different  conformations. Another similar description is the k−space density field, ρˆ(k) = i exp(ik · Ri ). If neglecting the effects of multi-body correlations and directional correlations, we can approximate the density field ρ(x) ¯ to the radial pair distribution function g(z), which is defined as,  1 δ(rij − z), (9) g(z) = 4πρz 2 N i j=i

where rij = |Ri − Rj |, or more exactly, some bin-average values of g(z) along z,

830

X. Zhou and Y. Jiang

 g(z)hp (z)z 2 dz,

gp = 2πρ

(10)

Where hp (z) is a two-step function, unity in a special z range (ap , bp ) and zero otherwise. Thus, each conformation corresponds to a group of gp (g vector), the spatial neighborhood of the conformation and their symmetric companions also correspond to the same g vector. If we select enough gp , all conformations with the same {gp } are thought to be identical in the viewpoint of slow dynamics. Therefore, we can define the bias potential in the low-dimension g space, ΔV (RN ) = ΔV ({gp (RN )}). To better identify conformations with not-toosmall bin size, we can add some important dynamics-related physical variables, such as, potential V (RN ), to the collective variable group. Another important variable is the two-body entropy s2 , presented first by H. S. Green [10], defined as,  (11) s2 = −2πρ [g(z) ln g(z) − g(z) + 1]z 2 dz. The two-body entropy, which forms the main part (85% − 95%) of macroscopic excess entropy, has been studied widely [11,12]. Actually, both gp and s2 are functional of g(z), similarly, it is also possible to use some another functional of g(z) to form the collective variables. In some special systems, it may be also useful to add some special order parameters Oq to take into account possible multi-body correlations. Finally, we have a group (of order 10) of general collective variables X = {X p }, which might involve V , s2 , some {gp } and some possible {Oq }, to identify conformations and form an appropriate bias potential ΔV (X(RN )) = kB T f+ (ln D(X)/Dc ). The corresponded bias force on each particle can be calculated by the chain rule of differentiation,  ∂ΔV ∂X p . (12) Δfi = − ∂X p ∂Ri p For example, from the (9), we have,  ∂ ∂g(z) 1 = rˆkj δ(rkj − z) 2 ∂Rk 2πρz N ∂z

(13)

j=k

where rˆkj is the corresponding unit vector of the distance vector rkj = Rj − Rk , and rkj = rjk is the length of rkj . Thus, for any functional of g(z), for example, s2 and gp , we can easily calculate its derivative, and hence the bias force. In practice, in order to get continuous bias forces, we replace the Dirac-δ function by some smooth functions, for example, Epanechnikov kernel function, 3 z2 Ke (z) = √ (1 − 2 ), 5 4 5

(14)

√ √ if − 5 ≤ z ≤ 5 , otherwise, Ke (z) = 0. While → 0, the Ke (z) → δ(z). Under the replacement, we redefine    zp +δ/2 1 Ke (z − rij )dz, (15) gp = 4πρδN zp2 i zp −δ/2 j=i

A General Long-Time Molecular Dynamics Scheme in Atomistic Systems

831

where δ and zp is the size and the center of the pth bin, respectively. It is easy to know, while → 0, eq.(15) is same as the normal formula of pair correlation function in bins. The s2 is redefined as, s2 = −2πρδ

pm 

zp2 (gp ln gp − gp + 1),

(16)

p=1

where pm is the maximal index of the bins and we have already set a cutoff along z, gp is the bin-average value of g(z) at the pth bin, defined by the (15). Therefore, the derivative of the s2 and gp is continuous. For example, ∂s2 1  =− rˆkj c(rkj ), ∂Rk N j=k  c(z) = ln gp [Ke (zp + δ/2 − z) − Ke (zp − δ/2 − z].

(17)

p d Here c(z) → − dz ln g(z) under the limit → 0. Besides choosing some functionals of pair distribution function g(z) as the collective variables, another selection is possible, and may be better in some special systems. For more complex systems, for example, multi-component mixtures or macromolecular systems, etc., we should identify different kinds of atoms and calculate different pair distribution function gAB (z), where A and B are the types of atoms. An alternative method is to use energy distribution function. For any pair-interactive potential E = u(z), such as Lennard-Jones interaction, coulomb interaction, the distance rij of atom pairs can be replace by the interactive energy u(rij ), thus we can define the pair distribution function in energy space,  G(E) ∝ δ(E − u(rij ). (18) i

j=i

Actually, G(E) is a transformation of g(z). Since the interactive energy is more directly related to the dynamics, using G(E) replace g(z) might be a good idea in identifying conformations. In addition, while existing many kinds of interaction between atoms, we can define the G(E) for different kinds of interactive energy, for example G(Ebond ), G(Eangle ), G(Etorsion ), G(Elj ), G(Ecoul ) etc., thus it is possible to take into account higher-order correlation. In the G(E) cases, the bias force can be calculated similarly as that in the g(z) cases. We are testing this idea in water condensation, details will appear elsewhere. We have examined the general hyperdynamics method in a simple system of N identical Lennard-Jones (LJ) particles and studied the slow gas/liquid transition in the N V T ensemble. The main results and simulation details are published in ref. ([9]). In such a system, the potential energy of transitional conformations (liquid drops with critical size) is lower than that of the stable gas phase. Thus, simple bias potential based on potential energy [13] cannot work at all. In general, for entropy-important systems, using the potential alone is not sufficient to identify the transitional conformations and then form the bias

832

X. Zhou and Y. Jiang

potential. We also found that with only two functionals of g(z), the potential energy V and the two-body entropy s2 , as the collective variables, we were able to correctly produce the slow dynamics. By designing suitable bias potentials, we could reach almost any slow timescale gas-liquid transition in reasonable time frame: we used a 106 time boost factor to find the transition in the system with very small saturation shown in the Fig. (1) [9]).

Fig. 1. Top: the distributions of two body entropy S2 from non-biased and biased simulations, and the rebuilt distribution of the bias simulation. Here, the simulated system starts from gas phase with very small saturation. Bottom: the free energy profiles from the non-biased and biased simulations are compared. The inset shows the simulated samples in the (S2 , V ) space. The observed liquid phase does not show here.

To summarize, we have extended the hyperdynamics method to more general cases by inhomogeneously compressing time. Our approach directly generates an explicit method to design the bias potential. In simple systems, two-body entropy s2 as a functional of the pair distribution function provide a good approximation of the density field in identifying the important conformations and for constructing the bias potential without prior knowledge of the conformational space. The method can be applied in complex fluids, such as glass transition and liquid/solid transition of single or multi component Lennard-Jones fluids, water etc., and s2 should be still the leading collective variable in the complex systems. For more complex cases, for example, polymers, biological systems, in whole conformational space, it is possible that we will need too many collective variables in identifying transitions, so that it is very difficult to estimate the density probability in the higher-dimension collective variables. A possible improvement is to divide the whole conformational space into some small parts, and use a few collective variables locally in each part.

A General Long-Time Molecular Dynamics Scheme in Atomistic Systems

833

Acknowledgments This work was supported by the US DOE under contract No. DE-AC5206NA25396. We are grateful to K. Kremer, H. Ziock, S. Rasmussen and A. F. Voter for stimulating discussions, comments and suggestions.

References 1. Bortz, A. B. and Kalos, M. H. and Lebowitz, J. L.: A new algorithm for Monte Carlo simulation of Ising spin systems. J. Comp. Phys. 17, (1975) 10-18. 2. Bolhuis, P. G. and Chandler, D. and Dellago, C. and Geissler, P. L.: Transition Path Sampling: Throwing Ropes over Mountain Passes in the Dark. Ann. Rev. Phys. Chem. 53, (2002) 291-318. 3. Olender, R. and Elber, R.: Calculation of classical trajectories with a very large time step: Formalism and numberical examples. J. Chem. Phys. 105, (1996) 92999315. 4. Elber, R.: Long-timescale simulation methods. Curr. Opin. Struct. Bio. 15, (2005) 151-156 . 5. Voter, A. F.: A method for accelerating the molecular dynamics simulation of infrequent events. J. Chem. Phys. 106, (1997) 4665-4677. 6. Miron, R. A. and Fichthorn, K. A.: Accelerated molecular dynamics with the bondboost method. J. Chem. Phys. 119, (2003) 6210-6216. 7. Rahman, J. A. and Tully, J. C.: Puddle-skimming: An efficient sampling of multidimensional configuration space. J. Chem. Phys. 116, (2002) 8750-8760. 8. Hamelberg, D. and Shen, T.-Y. and McCammon, J. A.: Phosphorylation effects on cis/trans isomerization and the backbone conformation of serine-proline Motifs: accelerated molecular dynamics analysis. J. Am. Chem. Soc. 127, (2005) 19691974. 9. Zhou, X. and Jiang, Y. and Kremer, K. and Ziock, H. and Rasmussen, S.: Hyperdynamics for entropic systems: Time-space compression and pair correlation function approximation. Phys. Rev. E 74, (2006) R035701. 10. Green, H. S.: The Molecular Theory of Fluids. (North-Holland, Amsterdam, 1952); 11. Baranyai, A. and Evans, D. J.: Direct entropy calculation from computer simulation of liquids. Phys. Rev. A 40, (1989) 3817-3822. 12. Giaquinta, P. V. and Giunta, G. and PrestipinoGiarritta, S.: Entropy and the freezing of simple liquids., Phys. Rev. A 45 (1992) R6966 - R6968. 13. Steiner, M. M. and Genilloud, P.-A and Wilkins, J. W.: Simple bias potential for boosting molecular dynamics with the hyperdynamics scheme. Phys. Rev. B 57, (1998) 10236-10239.

A New Constitutive Model for the Analysis of Semi-flexible Polymers with Internal Viscosity Jack Xiao-Dong Yang1,2 and Roderick V.N. Melnik2 Department of Engineering Mechanics, Shenyang Institute of Aeronautical Engineering, Shenyang 110136, China Mathematical Modelling & Computational Sciences, Wilfrid Laurier University, Waterloo, Ontario, Canada N2L 3C5 {jyang,rmelnik}@wlu.ca 1

2

Abstract. The analysis of dynamics of semi-flexible polymers, such as DNA molecules, is an important multiscale problem with a wide range of applications in science and bioengineering. In this contribution, we show how accounting for internal viscosity in dumbbell-type models may render physically plausible results with minimal computational cost. We focus our attention on the cases of steady shear and extensional flows of polymeric solutions. First, the tensors with moments other than the second order moment are approximated. Then, the nonlinear algebraic equation for the second moment conformation tensor is solved. Finally, substituting the resulting conformation tensor into the Kramers equation of Hookean spring force, the constitutive equations for the model are obtained. The shear material properties are discussed in the context of different internal viscosities and our computational results are compared with the results of other methods applicable for high shear or extensional rates. Keywords: Polymeric fluid; Dumbbell model; Internal viscosity.

1

Introduction

The dynamics of polymeric fluids is an important multiple time scale problem, involving the quick speed of the small solvent molecules, fast movement of atomic particles that constitute the polymers and slow orientation of the polymer deformation [1]. Brownian dynamic simulation is the technique taking into account all the motions in the polymeric fluids. Coarse-grained models are often used in the analysis of the rheological properties of polymeric fluids. From the mesoscopic view, the fast movements of the atomic particles of the polymers are neglected in order to relieve the complexity. In the investigation of the mathematical problems for the coarse-grained models, multiscale method is also an efficient technique to obtain the approximate solutions [2]. The simplest, albeit extremely useful in applications, model for polymer solutions is the Hookean dumbbell model proposed by Kuhn [3], where a polymer molecule in dilute solution is represented by two beads connected by a spring Y. Shi et al. (Eds.): ICCS 2007, Part I, LNCS 4487, pp. 834–841, 2007. c Springer-Verlag Berlin Heidelberg 2007 

A New Constitutive Model for the Analysis of Semi-flexible Polymers

835

force. This mathematical simplification has contributed a lot to the development of constitutive models for investigating properties of polymeric fluids with dumbbell-type models [1,4,5]. In order to obtain a better agreement with experimental results, a few additions have been incorporated into the standard dumbbell model. Of particular importance for our further discussion are two of them: finitely extensible nonlinear elastic (FENE) property and internal viscosity of the spring. Taking into account these properties leads to a situation where the governing equations of conformation tensors become nonlinear and the resulting problem has no closed form solution unless an appropriate approximation is made. Brownian dynamic simulations have been used widely in applications of the FENE dumbbell model to the analysis of fluids and semi-flexible polymers. The results obtained on the basis of such simulations are often more accurate compared to approximate theoretical methodologies discussed in [6,7,8]. Nevertheless, Brownian dynamic simulations are time consuming and do not render a straightforward technique for the explanation of the underlying physical properties of fluids and polymers. A consequence of this is intensive research efforts in developing semi-analytical methodologies which should be combined with computational experiments in order to shed further light on such properties. From a mathematical point of view, these efforts are reducible to the construction of some closure forms for the equation written with respect to the conformation tensor in such a way that the governing constitutive equations for the system (after certain approximations) become analytically solvable. The main idea is to apply an ensemble averaging methodology to non-second moment terms so that closed form solutions could be obtained [2,9,10]. However, for the dumbbell model with internal viscosity there is a forth moment tensor in the governing equation which makes it difficult to find the closed form solutions. Therefore, a numerical methodology should be developed to simulate the dynamics of the polymeric system. In the context of rheological properties of complex fluids, this issue was previously discussed by a number of authors (e.g., [11,15]). Booij and Wiechen [12] used a perturbation solution approach to calculate the first order approximation of the internal viscosity parameter. More recently, based on the Gaussian closure method, the second moment tensor and higher moment tensors have been calculated by integration over the conformation distribution space [13,14,15]. In this contribution, a new approximation scheme is proposed to solve the governing equations analytically. The forth moment tensor is approximated by an expression involving the second moment tensor in order to obtain a set of nonlinear algebraic equations. Based on the analytical solutions of such equations, the material properties of the polymeric fluids in steady-state shear flows and extensional viscosities in extensional flows are discussed. The phenomena of shear thinning and attenuation of pressure drop have been found and the results have been compared to the results obtained by Brownian dynamics simulations and the Gaussian closure method in the context of high shear or extensional rates. Our results can explain the phenomenon of shear thinning by introducing the

836

J. Xiao-Dong Yang and R.V.N. Melnik

internal viscosity and obtaining better predictions compared to the traditional technique.

2

The Governing Equations

For the polymers in a Newtonian solvent with viscosity ηs described with the bead-spring-bead dumbbell model it is assumed that there is no interaction between the beads. Let us denote the viscous drag coefficient due to the resistance of the flow by ζ. For the dumbbell model with internal viscosity (IV), the spring ˙ force is a function of the configuration vector, Q, and configuration velocity Q. The force law in this case can be expressed as:     ˙ ˙ = HQ + K Q ⊗ Q Q, (1) F Q, Q Q2 where Q is the length of vector Q, H is the spring coefficient of the dumbbell model and K is a constant denoting the measurement of the IV. The dot indicates ˙ represents the velocity vector of differentiation with respect to time t, so that Q the dumbbells. By substituting equation (1) and the equation of motion of one bead into the continuity equation [1] ∂ψ ∂ ˙ =− · Qψ, ∂t ∂Q we can derive the diffusion equation      Q⊗Q ∂ψ ∂ 2H 2kT ∂ψ =− · δ−g − Q , · [κ · Q] ψ − ∂t ∂Q Q2 ζ ∂Q ζ

(2)

(3)

where δ is a unit matrix, g = 2ε/(1 + 2ε), and ε is the relative internal viscosity. Note that ε = K/2ζ and it can formally range from zero to infinity. Furthermore, for g = 0, equation (3) recovers the form of the diffusion equation for Hookean dumbbells without IV. The second moment conformation tensor Q ⊗ Q is of the most interest when calculating the stress tensor. The governing equation for the conformation tensor can be developed by multiplying the diffusion equation by the dyadic product Q ⊗ Q and integrating over the entire configuration space:    Q⊗Q 4kT 4H (1 − g) Q ⊗ Q − Q ⊗ Q(1) = δ − 3g 2 ζ Q ζ   (4) Q⊗Q⊗Q⊗Q −2gκ : . Q2 The subscript “(1)” denotes convected derivatives. In homogeneous flows the convected derivative is defined as A(1) =

∂ A − κ · A + A · κT . ∂t

(5)

A New Constitutive Model for the Analysis of Semi-flexible Polymers

837

Unfortunately, it is not possible to calculate the second tensor Q ⊗ Q

moment and Q⊗Q⊗Q⊗Q. directly because there are other moment terms, e.g., Q⊗Q Q2 Hence, in order to cast the governing equation into a form amenable to the analytical solution, the higher order terms should be approximated. This is done as follows:   Q⊗Q Q ⊗ Q , (6) ≈ 2 Q Q2 eq   Q⊗Q⊗Q⊗Q Q ⊗ Q ⊗ Q ⊗ Q . (7) ≈ Q2 Q2 eq The equations (6), (7) are key approximations allowing us to make the governing equation analytically solvable. Note that equation (6) is similar to the Perterlin approximation used in FENE dumbbell model. By using equations (6) and (7), the governing equation (2) can be cast in the following form:   Q ⊗ Q 4kT 4H δ − 3g (1 − g) Q ⊗ Q − Q ⊗ Q(1) = 2 ζ Q eq ζ (8) Q ⊗ Q ⊗ Q ⊗ Q −2gκ : . Q2 eq In the steady state flow case, when all the time-dependent terms can be neglected, equation (8) becomes a nonlinear algebraic equation with respect to Q ⊗ Q. In the next section, we will seek a closed form solution to this governing equation, followed by the material properties discussion in the case of steady state shear flow.

3 3.1

Results and Examples The Material Coefficients in Steady Shear Flows

First, we consider the steady state shear flow with the velocity vector given by v = (vx , vy , vz ) = (γy, ˙ 0, 0) , where γ˙ is the shear rate. The transpose of velocity vector gradient is ⎞ ⎛ 0 γ˙ 0 T κ = (∇v) = ⎝ 0 0 0 ⎠ . 000

(9)

(10)

The average value of the square of the end-to-end distance in equilibrium in shear flow with shear rate γ˙ can be represented as [1]  2 2kT 3kT 2 Q eq = + (λH γ) ˙ , H H

(11)

838

J. Xiao-Dong Yang and R.V.N. Melnik

where the time constant λH is defined by λH = ζ/4H. For convenience, we use the following notation for the conformation tensor ⎞ ⎛ Axx Axy Axz H Q ⊗ Q , (12) A = ⎝ Ayx Ayy Ayz ⎠ = kT Azx Azy Azz so that its convected differentiation in steady state situations gives

A(1) = − κ · A + A · κT .

(13)

Substituting equations (11)-(13) into (8) and equating corresponding tensor elements in the result, we conclude that the nonzero elements of A can be calculated as follows Axx =

˙ xy + 1 2λH γA 1−

2(λH γ) ˙ 2 −2λH γA ˙ xy g 3+2(λH γ) ˙ 2

,

Ayy = Azz =

1 1−

2(λH γ) ˙ 2 −2λH γA ˙ xy g 3+2(λH γ) ˙ 2

,

(14)

where Axy is the real root of 2

4 (λH γ) ˙

4λH γ˙



˙ 2 (λH γ)

2



2 3 2  2 g Axy + 2 1− 2 g Axy 2 3 + 2 (λ γ) ˙ 3 + 2 (λ γ) ˙ H H 3 + 2 (λH γ) ˙  2 2 ˙ 2 (λH γ) + 1− Axy − λH γ˙ = 0. 2g 3 + 2 (λH γ) ˙

(15)

For convenience, we use the Kramers equation for the stress tensor in the spring model: H τp =− Q ⊗ Q + δ = −A + δ. (16) nkT kT The three material functions of interest, namely, the viscosity η (γ), ˙ the firstnormal stress coefficient Ψ1 (γ), ˙ and the second-normal stress coefficient Ψ2 (γ) ˙ are connected with the stress components by the following relationships: τxy = −η (γ) ˙ γ, ˙ τxx − τyy = −Ψ1 (γ) ˙ γ˙ 2 , ˙ γ˙ 2 . τyy − τzz = −Ψ2 (γ)

(17)

Substituting equation (16) into (14), we obtain the material coefficients via the following representations: Axy η (γ) ˙ , = nkT λH λH γ˙ Ψ1 (γ) ˙ 2Axy  =  2 2 2(λ γ) ˙ −2λH γA ˙ xy H nkT λH 1− g λH γ˙ 2 3+2(λH γ) ˙ Ψ2 = 0.

(18)

A New Constitutive Model for the Analysis of Semi-flexible Polymers

839

Based on equations (15) and (18), we calculate the material properties for different internal viscosities. The plots presented in Figure 1 demonstrate comparison results between our approximate solutions (AS), the results obtained by Brownian dynamics simulations (BD), and the results obtained with the Gaussian closure technique (GC) [15]. The internal viscosity was chosen as ε = 1. In Figure 1, the solid lines represent our algebraic solutions to the governing equations for the material coefficients, the dots represent the data obtained by Brownian dynamics simulations, and the dot-dash lines represent the results obtained by the Gaussian closure method. For both viscosity and first-normal stress coefficients, our algebraic solutions exhibit the plateau values appearing also in the case of Brownian dynamics simulations. Observe that compared to the Gaussian closure method, our methodology has a wider range of applicability. 0

η /(nkTλH)

10

0

H

ψ /(nkTλ2 )

10

−1

1

10

−2

10

−1

10

AS BD GS

AS BD GC

−2

0

10

λHγ

2

10

10

(a) The viscosity coefficient

0

10

λ γ H

2

10

(b) The first-normal stress coefficient

Fig. 1. Comparison of analytical results with Brownian dynamics simulations and the Gaussian closure methodology (ε = 1)

3.2

The Extensional Viscosity in Steady Extensional Flows

Next, we consider the steady extensional flow with the velocity vector given by   1 1 ˙ (19) v = (vx , vy , vz ) = − , − , 1 ε, 2 2 where ε˙ is the extensional rate in the z direction. The transpose of velocity vector gradient is ⎞ ⎛ 1 −2 0 0 T ˙ (20) κ = (∇v) = ⎝ 0 − 21 0 ⎠ ε. 0 0 1 In the steady uniaxial extensional flow with strain rate ε, ˙ the extensional viscosity is defined as [16] μe =

2τzz − τxx − τyy . 6ε˙

(21)

840

J. Xiao-Dong Yang and R.V.N. Melnik

By using the equation (8), we get the solutions of the equation with respect to the conformation tensor by the same procedure used in the steady state shear flow case. Then the extensional viscosity is calculated as 2Azz − Axx − Ayy μe , = nkT λH 6ε˙

(22)

where Axx , Ayy and Azz are determined by the following set of algebraic equations: (λH ε˙ + 1) Axx + 43 gλH ε˙ (Azz − A11 ) = 1, (23) (−2λH ε˙ + 1) Axx + 43 gλH ε˙ (Azz − A11 ) = 1, Ayy = Axx .

10

μe/(nkTλH)

10

g=0.01 g=0.1 g=0.5 5

10

0

10

−4

10

−2

10

λ ε H

0

10

2

10

Fig. 2. The viscosity coefficient in the steady extensional flow

Figure 2 demonstrates decrease in extensional viscosity with higher rates of extensional flows. This explains the attenuation of the pressure drop in strong extensional flows.

4

Conclusions

In this contribution, we developed a set of approximate semi-analytical solutions for the dumbbell model with IV without integration of the Gaussian distribution. Our concise equations can predict the material coefficients of polymeric fluid well, qualitatively and also quantitatively. The shear thinning phenomena are described well with the new developed model deduced from the dumbbell model with internal viscosity. The effect of internal viscosity in the extensional flow case has also been demonstrated. By comparing our computational results with Brownian dynamic simulations and the Gaussian closure methodology, we demonstrated the efficiency of the proposed approximate technique for a wider range of high shear or extensional flow rates.

A New Constitutive Model for the Analysis of Semi-flexible Polymers

841

Acknowledgment. This work was supported by NSERC.

References 1. Bird R.B., Curtiss C.F., Armstrong R.C., Hassager O.: Dynamics of Polymer Liquids Vol. 2 Kinetic Theory. John Wiley & Sons (1987) 2. Nitsche L.C., Zhang W., Wedgewood L.E.: Asymptotic basis of the L closure for finitely extensible dumbbells in suddenly started uniaxial extension. J. NonNewtonian Fluid Mech. 133 (2005) 14-27 ¨ 3. Kuhn W. :Uber die Gestalt fadenf¨ ormiger Molek¨ ule in L¨ osungen. Kolloid Z. 68 (1934) 2-11 4. Bird R.B., Wiest J.M.: Constitutive equations for polymeric liquids. Annu. Rev. Fluid Mech. 27 (1995) 169-193 ¨ 5. Ottinger H.C.: Stochastic Processes in Polymeric Fluids: Tools and Examples for Developing Simulation Algorithms. Springer-Verlag, Berlin Heidlberg New York (1996) 6. Hur J.S., Shaqfeh E.S.G.: Brownian dynamics simulations of single DNA molecules in shear flow. J. Rheol. 44 (2000) 713-742 7. Hu X., Ding Z., Lee L.J.: Simulation of 2D transient viscoelastic flow using the CONNFFESSIT approach. J. Non-Newtonian Fluid Mech. 127 (2005) 107-122 8. Lozinski A., Chauviere C.: A fast solver for Fokker-Planck equation applied to viscoelastic flows calculations: 2D FENE model. J. Computational Physics 189 (2003) 607-625 ¨ 9. Herrchen M., Ottinger H.C.: A detailed comparison of various FENE dumbbell models. J. Non-Newtonian Fluid Mech. 68 (1997) 17-42 10. Keunings R.: On the Peterlin approximation for finitely extensible dumbbells. J. Non-Newtonian Fluid Mech. 68 (1997) 85-100 11. Hua C.C., Schieber J.D.: Nonequilibrium Brownian dynamics simulations of Hookean and FENE dumbbells with internal viscosity. J. Non-Newtonian Fluid Mech. 56 (1995) 307-332 12. Booij H.C., Wiechen, P.H.V.: Effect of internal viscosity on the deformation of a linear macromolecule in a sheared solution. J. Chem. Phys. 52 (1970) 5056-5068 13. Manke C.W., Williams M.C.: The internal-viscosit dumbbell in the high-IV limit: Implications for rheological modeling. J. Rheol. 30 (1986) 019-028 14. Schieber J.D.: Internal viscosity dumbbell model with a Gaussian approximation. J. Rheol. 37 (1993) 1003-1027 15. Wedgewood L.E.: Internal viscosity in polymer kinetic theory: shear flows. Rheologica Acta 32 (1993) 405-417 16. Tirtaatmadja V., Sridhar T.: A filament stretching device for measurement of extensional viscosity. J. Rheol. 37 (1993) 1081-1102

Coupled Navier-Stokes/DSMC Method for Transient and Steady-State Gas Flows Giannandrea Abbate1 , Barend J. Thijsse2 , and Chris R. Kleijn1 1

Dept. of Multi-Scale Physics & J.M.Burgers Centre for Fluid Mechanics, Delft University of Technology, Prins Bernhardlaan 6, Delft, The Netherlands [email protected],[email protected] http://www.msp.tudelft.nl 2 Dept. of Material Science and Engineering, Delft University of Technology, Mekelweg 2, Delft, The Netherlands [email protected] http://www.3me.tudelft.nl

Abstract. An adaptatively coupled continuum-DSMC approach for compressible, viscous gas flows has been developed. The continuum domain is described by the unsteady Navier-Stokes equations, solved using a finite volume formulation in compressible form to capture the shock. The molecular domain is solved by the Direct Simulation Monte Carlo method (DSMC). The coupling procedure is an overlapped Schwarz method with Dirichlet-Dirichlet boundary conditions. The domains are determined automatically by computing the Kn number with respect to the local gradients length scale. The method has been applied to simulate a 1-D shock tube problem and a 2-D expanding jet in a low pressure chamber. Keywords: Direct Simulation Monte Carlo; Coupled Method; Hybrid Method; Rarefied Gas Flow; Navier-Stokes solver.

1

Introduction

In several applications we are faced with the challenge to model a gas flow transition from continuum to rarefied regime. Examples include: flow around vehicles at high altitudes [1], flow through microfluidic gas devices [2], small cold thruster nozzle and plume flows [3], and low pressure thin film deposition processes or gas jets [4]. It is always very complicated to describe this kind of flows; in the continuum regime (Kn  1), Navier-Stokes equations can be used to model the flow, whereas free molecular flow (Kn  1) can be modelled using Molecular Dynamics models. For the intermediate Knudsen number ranges (Kn = 0.01 − 10), neither of the approaches is suitable. In this regime the best method to use is DSMC (Direct Simulation Monte Carlo). The computational demands of DSMC, however, scale with Kn−4 and when the Knudsen number is less than ∼ 0.05, its time and memory expenses become inadmissible. Y. Shi et al. (Eds.): ICCS 2007, Part I, LNCS 4487, pp. 842–849, 2007. c Springer-Verlag Berlin Heidelberg 2007 

Coupled N-S/DSMC Method for Transient and Steady-State Gas Flows

843

Different solutions have been proposed to compute such flows. The most standard uses a continuum solver with analytical slip boundary conditions [5]. This method is suitable only in conditions where Kn < 0.1 and the precise formulation of the slip boundary conditions is strongly geometry dependent. For this reason, several hybrid continuum/molecular models have been proposed, for instance: Molecular Dynamics (MD) and Navier-Stokes (N-S) equations [6], Boltzmann and N-S equations [7], Direct Simulation Monte Carlo (DSMC) and Stokes equations [2], DSMC and incompressible N-S equations [8], and DSMC and N-S equations [9,10,11,12,13]. In particular, Garcia et al.[9] constructed a hybrid particle/continuum method with an adaptive mesh and algorithm refinement. It was a flux-based coupling method with no overlapping between the continuum and the DSMC regions. On the contrary, Wu and al. [10] and Schwartzentruber and al. [11,12] proposed an ’iterative’ coupled CFD-DSMC method where the coupling is achieved through an overlapped Schwarz method with Dirichlet-Dirichlet type boundary conditions. However, both Wu and al. [10] and Schwartzentruber and al. [11,12] methods are only suitable for gas flow simulations under steady-state conditions, while the method that we propose has been applied both to transient and steadystate gas flow simulations. We consider, in the continuum regime, the compressible N-S equations and, in the transitional regime, DSMC because it is several order of magnitude more efficient than MD and the Boltzmann equations solvers. The coupling of the two models is reached through an overlapped Schwarz method [10] with DirichletDirichlet boundary conditions. It is an adaptative method in which, during the computations, the Kn number with respect to the local gradients is computed to determine and divide the CFD (Computational Fluid Dynamics) domain from the DSMC one.

2 2.1

The Coupling Method The CFD Solver

The CFD code used is a 2-D, unsteady code based on a finite volume formulation in compressible form to capture the shock. It uses an explicit, second-order, fluxsplitting, MUSCL scheme for the Navier-Stokes equations. Because a high temperature flow has to be modelled, a power-law temperature dependence was used for the viscosity μ and a model coming from kinetic gas theory for the thermal conductivity κ. The density was computed from the ideal gas law. 2.2

The Molecular Algorithm: DSMC

The 2-D DSMC code developed is based on the algorithm described in [14]. A ”particle reservoirs” approach was used to implement the inlet (outlet) boundary conditions. A Maxwell-Boltzmann or a Chapmann-Enskog velocity distributions can be used to generate molecules in those reservoirs.

844

2.3

G. Abbate, B.J. Thijsse, and C.R. Kleijn

Schwarz Coupling

We describe in this section two different strategies developed and implemented to couple the Navier-Stokes based CFD code to the DSMC code: One for steady state flow simulation, the other for unsteady flow simulations. Steady Formulation: We propose a hybrid coupling method based on the Schwarz method [10] and consisting of two stages. In the first stage the unsteady N-S equations are integrated in time on the entire domain Ω until a steady state is reached. From this solution, local Kn numbers with respect to the local gradients length scales [15] are computed according to λ |  Q| (1) Q where λ is the mean free path length and Q is a flow property (density, temperature etc.); The values of KnQ are used to split Ω in the subdomains ΩDSMC (Kn > Knsplit − ΔKn), where the flow field will be evaluated using the DSMC technique, and ΩCF D (Kn < Knsplit ), where N-S equation will be solved. For Knsplit a value of 0.05 was used. Between the DSMC and CFD regions an overlap region is considered, where the flow is computed with both the DSMC and the CFD solver; the value of ΔKn can be chosen in ”ad hoc” manner in order to vary the overlap region size. In the second stage, DSMC and CFD are run in their respective subdomains with their own time steps (ΔtDSMC and ΔtCF D , respectively), until a steady state is reached. First DSMC is applied; molecules are allocated in the DSMC subdomain according to the density, velocity and temperature obtained from the initial CFD solution. A Maxwell-Boltzmann or a Chapmann-Enskog distributions can be chosen to create molecules. It is important to say that the grid is automatically refined in the DSMC region in order to respect the DSMC requirements. The boundary conditions to the DSMC region come from the solution in the CFD region. As described in the previous section for the inlet (outlet) boundary, outside the overlapping region some ”particle reservoirs” are considered. In these cells molecules are created according to density, velocity, temperature and their gradients of the solution in the CFD region, with a Maxwell-Boltzmann or a Chapmann-Enskog distributions. After running the DSMC, the N-S equations are solved in the CFD region. The boundary conditions comes from the solution in the DSMC region averaged over the CFD cells. Once a steady state solution has been obtained in both the DSMC and N-S region, the local KnQ numbers are re-evaluated and a new boundary between the two regions is computed. This second stage is iterated until in the overlapping region DSMC and CFD solutions differ less than a prescribed value. We made an extensive study of the influence of various coupling parameters, such as the size of the overlap region (4 − 59 mean free path lengths) and the amount of averaging applied to the reduce DSMC statistical scatter (averaging over 5, 30 and 50 repeated runs). The influence of these parameters on the final solution was found to be small. KnQ =

Coupled N-S/DSMC Method for Transient and Steady-State Gas Flows

845

Unsteady Formulation: In the unsteady formulation, the described coupling method is re-iterated every coupling time step Δtcoupling >> ΔtDSMC , ΔtCF D , starting on the solution at the previous time step. As expected, in order to avoid instabilities, it was necessary to keep the Courant number (based on the coupling time step, the molecules most probable velocity, and the CFD grid cell size) below one. In the second stage, after every coupling step, the program compares the predicted DSMC region with the one of the previous step. In the cells that still belong to the DSMC region, we consider the same molecules of the previous time step whose properties were recorded. Molecules that are in the cells that no longer belong to the DSMC region are deleted. In cells that have changed from CFD into a DSMC cell, new molecules are created with a Maxwell-Boltzmann or a Chapmann-Enskog distribution, according to the density, velocity and temperature of the CFD solution at the previous time step. At the end of the every coupling step molecule properties are recorded to set the initial conditions in the DSMC region for the next coupling step.

3 3.1

Results 1-D Shock-Tube Problem

The unsteady coupling method was applied to the unsteady shock tube test case (fig.1).

Fig. 1. Shock tube test case

The code models a flow of Argon inside an 0.5m long tube between two tanks in different thermo-fluid-dynamic conditions. In the left tank there are a pressure of 2000P a and a temperature of 12000K, while in the tube and in the right tank there are a pressure of 100P a and a temperature of 2000K. When the membrane that separates the two regions breaks a shock travels in the tube from left to right. Upstream from the shock, the gas has high temperature and pressure, but gradient length scales are very small. Downstream of it both temperature and pressure are lower, but gradient length scales are large. As a result, the local Kn number KnQ is high upstream of the shock and low downstream. In the hybrid DSMC-CFD approach, DSMC is therefore applied upstream, and CFD downstream. The continuum grid is composed of 100 cells in x direction and 1 cell in y direction, while the code automatically refines the mesh in the DSMC region to fulfill its requirements. In the DSMC region molecules were created with the Chapman-Enskog distribution. It was demonstrated, in fact, that in a

846

G. Abbate, B.J. Thijsse, and C.R. Kleijn

hybrid DSMC/CFD method a Chapman-Enskog distribution is required when the viscous fluxes are taken into account, while a simple Maxwellian distribution is adequate when the continuum region is well approximated by the Euler equations [9]. The particle cross section was evaluated using the Variable Soft Sphere (VSS) model because it is more accurate than the Variable Hard Sphere (VHS) one to model viscous effect. The coupling time step is Δtcoupling = 2.0×10−6sec. and the ensemble averages of the DSMC solution to reduce the scattering were made on 30 repeated runs. In addition to the hybrid approach, the problem was also solved using CFD only and DSMC only (which was feasible because of the 1-D nature of the problem). The latter is considered to be the most accurate. In fig.2 the pressure inside the tube after 3.0 × 10−5 sec., evaluated with the hybrid (Schwarz coupling) method is compared with the results of the full DSMC simulation and the full CFD simulation.

Fig. 2. Pressure and Kn number in the tube after 3.0 × 10−5 sec

In the same picture also local Knudsen number KnQ , computed using the hybrid method, is compared with the full CFD simulation. From the results shown in fig.2, it is clear that the full CFD approach fails due to the high values of the local Kn number caused by the shock presence. The full CFD approach predicts a shock thickness less than 1 cm, which is unrealistic considering the fact that the mean free path near the shock is of the order of several centimeters. In the full DSMC approach, therefore, the shock is smeared over almost 20 cm. The results obtained with the hybrid approach are virtually identical to those obtained with the full DSMC solver, but they were obtained in less than one fifth of the CPU time. 3.2

2-D Expanding Jet in a Low Pressure Chamber

The steady-state coupling method was applied to a steady state expanding neutral gas jet in a low pressure chamber (fig.3). The code models an Argon jet, at a temperature of 6000K and Mach number 1, injected from the top in a 2-D chamber of dimensions 0.32m × 0.8m, through a slot of 0.032m. The pressure inside the chamber is kept at a value of 10P a through two slots of 0.04m wide disposed on its lateral sides at a distance 0.6m from the top. Walls are cooled at a temperature of 400K. The continuum grid


Fig. 3. Expanding jet in a low pressure deposition chamber test case

Fig. 4. Kn number and CFD/DSMC domains splitting

is composed of 50 cells in the x direction and 160 in the y direction, while the code automatically refines the mesh in the DSMC region to fulfill its requirements. In the DSMC region, molecules were created with the Chapman-Enskog distribution and the particle cross section was evaluated using the VSS model. Fig.4 shows the Knudsen number in the chamber, evaluated with reference to the inlet dimension (KnL) and to the local temperature gradient length scale (KnT), respectively. The pressure does not vary much over the domain. Around the inlet the temperature is high and, because of the presence of a shock, the gradient length scales are small. In the rest of the chamber the temperature is lower and


Fig. 5. Velocity and temperature fields in the deposition chamber

the gradient length scales are large. As a result the Kn number is high around the inlet and low in the rest of the domain. The right-hand side of Fig.4 shows the resulting division between the DSMC, CFD and overlapping regions. In fig.5 the velocity and temperature fields, evaluated with the hybrid (Schwarz coupling) method, are compared with the results of a full CFD simulation. It is evident that the DSMC region influences the flow field, and its effects are present in a region wider than the DSMC and overlapping regions alone. Far away from the DSMC region, however, the full CFD and the hybrid method give very similar results.

4 Conclusions

A hybrid continuum-rarefied flow simulation method was developed to couple a Navier-Stokes description of a continuum flow field with a DSMC description of a rarefied one. The coupling is achieved by an overlapped Schwarz method implemented both for steady-state and transient flows. Continuum subdomain boundary conditions are imposed on the molecular subdomain via particle reservoirs. The molecular subdomain boundary conditions are imposed on the continuum subdomain using simple averaging. The subdomains are determined automatically by computing the Kn number with respect to the local gradient length scales on a preliminary Navier-Stokes solution. The method has been applied to a shock tube problem and to a 2-D expanding jet in a low pressure chamber, showing its capability of predicting the flow field even where a CFD solver fails.


Acknowledgments. We thank Profs. D.C. Schram and M.C.M. van de Sanden for useful discussions and the DCSE (Delft Centre for Computational Science and Engineering) for financial support.

References
1. F. Sharipov, Hypersonic flow of rarefied gas near the Brazilian satellite during its re-entry into atmosphere, Brazilian Journal of Physics, vol. 33, no. 2, June 2003
2. O. Aktas, N.R. Aluru, A combined continuum/DSMC technique for multiscale analysis of microfluidic filters, Journal of Computational Physics 178, 342–372 (2002)
3. C. Cai, I.D. Boyd, 3D simulation of plume flows from a cluster of plasma thrusters, 36th AIAA Plasmadynamics and Laser Conference, 6-9 June 2005, Toronto, Ontario, Canada, AIAA-2005-4662
4. M.C.M. van de Sanden, R.J. Severens, J.W.A.M. Gielen et al., Deposition of a-Si:H and a-C:H using an expanding thermal arc plasma, Plasma Sources Science and Technology 5 (2), 268–274 (1996)
5. B. Alder, Highly discretized dynamics, Physica A 240, 193–195 (1997)
6. N.G. Hadjiconstantinou, Hybrid atomistic-continuum formulations and the moving contact-line problem, Journal of Computational Physics 154, 245–265 (1999)
7. P. Le Tallec, F. Mallinger, Coupling Boltzmann and Navier-Stokes equations by half fluxes, Journal of Computational Physics 136, 51–67 (1997)
8. H.S. Wijesinghe, N.G. Hadjiconstantinou, Discussion of hybrid atomistic-continuum methods for multiscale hydrodynamics, International Journal for Multiscale Computational Engineering 2(2), 189–202 (2004)
9. A.L. Garcia, J.B. Bell, W.Y. Crutchfield, B.J. Alder, Adaptive mesh and algorithm refinement using Direct Simulation Monte Carlo, Journal of Computational Physics 154, 134–155 (1999)
10. J.S. Wu, Y.Y. Lian, G. Cheng, R.P. Koomullil, K.C. Tseng, Development and verification of a coupled DSMC-NS scheme using unstructured mesh, Journal of Computational Physics 219, 579–607 (2006)
11. T.E. Schwartzentruber, L.C. Scalabrin, I.D. Boyd, Hybrid particle-continuum simulations of non-equilibrium hypersonic blunt body flows, AIAA-2006-3602, June 2006, San Francisco, CA
12. T.E. Schwartzentruber, I.D. Boyd, A hybrid particle-continuum method applied to shock waves, Journal of Computational Physics 215, no. 2, 402–416 (2006)
13. A.J. Lofthouse, I.D. Boyd, M.J. Wright, Effects of continuum breakdown on hypersonic aerothermodynamics, AIAA-2006-0993, January 2006, Reno, NV
14. G.A. Bird, Molecular Gas Dynamics and Direct Simulation Monte Carlo, Clarendon Press, Oxford Science, 1998
15. Wen-Lan Wang, I.D. Boyd, Continuum breakdown in hypersonic viscous flows, 40th AIAA Aerospace Sciences Meeting and Exhibit, January 14-17, 2002, Reno, NV

Multi-scale Simulations of Gas Flows with Unified Flow Solver

V.V. Aristov1, A.A. Frolova1, S.A. Zabelok1, V.I. Kolobov2, and R.R. Arslanbekov2

1 Dorodnicyn Computing Center of the Russian Academy of Sciences, Vavilova str. 40, 119991 Moscow, Russia, {aristov,afrol,serge}@ccas.ru
2 CFD Research Corporation, 215 Wynn Drive, Huntsville, AL 35803, USA, {vik,rra}@cfdrc.com

Abstract. The Boltzmann kinetic equation links micro- and macroscale descriptions of gases. This paper describes multi-scale simulations of gas flows using a Unified Flow Solver (UFS) based on the Boltzmann equation. A direct Boltzmann solver is used for microscopic simulations of the rarefied parts of the flows, whereas kinetic CFD solvers are used for the continuum parts of the flows. The UFS employs an adaptive mesh and algorithm refinement procedure for automatic decomposition of computational domain into kinetic and continuum parts. The paper presents examples of flow problems for different Knudsen and Mach numbers and describes future directions for the development of UFS technology. Keywords: Boltzmann equation, Rarefied Gas Dynamics, direct Boltzmann solver, kinetic CFD scheme, multiscale flows, adaptive mesh.

1 Introduction

The Boltzmann kinetic equation is a fundamental physical tool, which links two levels of description of gases. The first is the microscopic level at the atomistic scale and the second is the macroscopic level at the continuum scale. The kinetic equation takes into account interactions among gas particles under the molecular chaos assumption to describe phenomena on the scale of the mean free path. In principle, the Boltzmann equation can also describe macroscopic phenomena, but its usage at the macroscopic scale becomes prohibitively expensive and unnecessary. The continuum equations (Euler or Navier-Stokes) derived from the Boltzmann equation can be used at this scale. The present paper describes mathematical methods which allow one to fully realize this property of the Boltzmann equation. The Unified Flow Solver (UFS) is a variant of hybrid methods for the simulation of complex gas (or possibly liquid) flows in the presence of multiscale phenomena including nonequilibrium and nonlinear processes. The UFS methodology combines direct numerical solution of the Boltzmann equation with kinetic schemes (an asymptotic case of the direct methods of solving the Boltzmann equation), which approximate the Euler or Navier-Stokes (NS) equations.


The development of hybrid solvers combining kinetic and continuum models has been an important area of research over the last decade (see Ref. [1] for a review). Most researchers applied traditional DSMC methods for the rarefied domains and identified the statistical noise inherent to particle methods as an obstacle for coupling kinetic and continuum solvers [2]. The UFS combines a direct Boltzmann solver [3,4] with kinetic CFD schemes to facilitate coupling kinetic and continuum models based on the continuity of half-fluxes at the boundaries [5]. The UFS uses a Cartesian grid in physical space, which is generated automatically around objects embedded in the computational domain. A continuum solver runs first, and the computational grid in physical space is dynamically adapted to the solution. Kinetic domains are identified using continuum breakdown criteria, and the Boltzmann solver replaces the continuum solver where necessary. The Boltzmann solver uses a Cartesian mesh in velocity space and employs the efficient conservative methods of calculating the collision integral developed by Tcheremissine [6]. The parallel version of the UFS enforces dynamic load balance among processors. Domain decomposition among processors is performed using space-filling curves (SFC) with different weights assigned to kinetic and continuum cells depending on the CPU time required for performing computations in the cell [7]. This paper presents illustrative examples of UFS applications for supersonic and subsonic problems and outlines directions for future development of the UFS methodology for simulating complex multi-scale flows.
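The weighted space-filling-curve decomposition can be pictured with a short sketch: cells are ordered along a Morton (Z-order) curve and the curve is cut into contiguous chunks of roughly equal total weight, with kinetic cells weighted more heavily than continuum ones. The 20:1 weight ratio, the grid size and the flagged kinetic region are illustrative assumptions, not the UFS implementation.

```python
def morton2d(ix, iy, bits=16):
    """Interleave the bits of (ix, iy) to get a Z-order (Morton) key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def partition_cells(cells, nproc, kinetic_weight=20.0, continuum_weight=1.0):
    """cells: list of (ix, iy, is_kinetic). Returns a processor id per cell."""
    order = sorted(range(len(cells)), key=lambda i: morton2d(cells[i][0], cells[i][1]))
    weights = [kinetic_weight if cells[i][2] else continuum_weight for i in order]
    total = sum(weights)
    target = total / nproc
    owner = [0] * len(cells)
    acc, proc = 0.0, 0
    for idx, w in zip(order, weights):
        if acc + w > target * (proc + 1) and proc < nproc - 1:
            proc += 1        # cut the curve here and move on to the next processor
        owner[idx] = proc
        acc += w
    return owner

# Example: a 64x64 grid where a central square is flagged as kinetic
cells = [(ix, iy, 24 <= ix < 40 and 24 <= iy < 40) for ix in range(64) for iy in range(64)]
owner = partition_cells(cells, nproc=8)
print("cells per processor:", [owner.count(p) for p in range(8)])
```

Because each chunk is contiguous along the curve, processors holding kinetic cells receive fewer cells overall, which is the intent of weighting cells by their expected CPU cost.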

2 Examples of Multi-scale Simulations with UFS

The UFS has already been demonstrated for a variety of steady-state laminar flows with different Knudsen and Mach numbers [5]. In this section, we present some examples to illustrate important features of the UFS.

2.1 High Speed Flows

A comparison of UFS results with available experimental data for supersonic flows of single component atomic gases was performed for a 1D shock wave structure and 2D flow around a circular cylinder [5]. Here we show some new results for hypersonic flows and extensions for gas mixtures. Figure 1 shows an example of 2D solutions by

Fig. 1. Axial velocity for OREX at M=27, Kn=0.1 (on the left). Gas pressure for a prism at M=18, Kn=0.25 (on the right).


the kinetic Navier-Stokes solver for the Orbital Reentry Experiment [9] (left) and flow around a prism (right). The kinetic scheme for the NS equations is derived from the Boltzmann equation as described in [8,5]. Figure 2 illustrates the interaction of supersonic jets of two gases with different masses. The kinetic Euler solver for gas mixtures is based on the model of Ref. [10]. Two free jets at Mach number M=3 are considered, with a ratio of inlet pressure to bulk pressure of 10 and a mass ratio of 2. The distributions of gas density and longitudinal velocity are shown in Fig. 2 for a steady state. The asymmetry with respect to y=0 can be explained by the different masses of the molecules in the top and bottom jets.


Fig. 2. Density (left) and longitudinal velocity (right) for a steady regime

The UFS is currently being enhanced with non-equilibrium chemistry (based on the model kinetic equation by Spiga & Groppi [11]) coupled to radiation transport and plasma capabilities. This will bring the fidelity of modeling high speed flows of molecular gases to the next level and enable accurate prediction of the aerothermal environment around trans-atmospheric vehicles.

2.2 Low Speed Flows

There is a class of low-speed continuum flows which are not described by the traditional NS equations [12]. Ghost and non-NS effects present themselves in well-known classical fluid problems such as the Bénard and Taylor-Couette problems. These effects can have important practical implications for low-pressure industrial processing reactors [13]. The statistical DSMC methods are not well suited for the simulation of low speed problems, especially for studies of instabilities and transient effects, due to the large statistical noise. Figure 3 illustrates a low-speed flow induced by temperature gradients between two non-uniformly heated plates. This flow is absent according to the traditional NS equations with slip boundary conditions. Both the direct Boltzmann solver and the kinetic NS solver produce the correct physical picture of the flow, shown in Fig. 3. The temperature T of the surfaces goes from 1.7 to 1 (hot bottom and cold top); the bottom is a symmetry boundary, and the top and left boundaries have T=1.


Fig. 3. Temperature driven vortex: temperature and velocity fields for three values of the Knudsen number (Kn=0.01, 0.07, 0.3 from left to right). Kinetic and continuum zones are shown in the corner of the middle figure: dark – continuum, grey – kinetic zones.

The UFS has been used for simulations of gas flows in micro-channels and nozzles. We have recently confirmed the Knudsen minimum in the dependence of the mass flow rate on the Knudsen number for a 2D isothermal flow in a channel, using the Boltzmann solver with hard sphere molecules.

3 Future Directions

The Boltzmann kinetic equation provides an appropriate physical and mathematical apparatus for the description of gas flows in different regimes. The UFS approach can serve as an instrument for practical simulations of complex multi-scale flows including unstable and turbulent regimes. For small Kn numbers (large Re numbers) the kinetic Euler solver can be used for both laminar flows and unstable coherent large-scale structures. For smaller scales, we can select the NS solver or the kinetic solver according to specific criteria. In any case we assume that the kinetic solver can adequately describe the micro scales of unstable and turbulent flows (but can be very expensive computationally). From the kinetic point of view, the transition to unstable turbulent regimes can be treated as the appearance of virtual structures in phase space with different scales in physical and velocity space. Strong deviations from equilibrium in velocity space can explain the rapid growth of dissipation in turbulent flows. The applicability of the NS equations in these regimes, and the necessity of replacing the linear rheological model by more complex nonlinear models, remain topics of discussion [14]. The kinetic description is expected to describe correctly all flow properties at the microscopic level, since the smallest vortices of turbulent flows remain of the order of 100 mean free paths. The unstable flows already analyzed with the Boltzmann equation [15-17] required a sufficiently fine velocity grid, but the spatial and temporal scales remained macroscopic, although smaller than for laminar flows. For some parameters (frequencies of chaotic pulsations and the density amplitude) there was agreement between the experimental data for free supersonic jet flows and the computations of [15-17, 3]. The capabilities of the kinetic approach have been confirmed in independent investigations of different unstable and turbulent gas flows by DSMC schemes [18, 19] and by the BGK equation [20].


3.1 Instabilities

3D solutions of the Boltzmann equation for two Kn numbers, for stable and unstable regimes, are shown in Fig. 4 for a free underexpanded supersonic jet flow. The gas flows from left to right through a circular orifice; the Mach number at the orifice is M=1. The pressure ratio of the flowing gas (to the left of the orifice) to the gas at rest (to the right of the orifice) is p0/p1=10. One can see in Fig. 4 that for the large Reynolds number (small Knudsen number) the gas flow is unstable. The mechanism of this instability is connected with the appearance of so-called Taylor-Görtler vortices. The Boltzmann equation has been solved by the direct conservative method with spatial steps larger than the mean free path. The calculated system of vortices has been observed in experiments [21], and the kinetic simulations reproduced the experimentally observed macroscopic structures.


Fig. 4. Velocity fields in cross-sections for 5 values of the x-coordinate. (a) There is no vorticity in the plot for the larger Knudsen number. (b) Pairs of longitudinal vortices are visible. A quarter of each cross-section is depicted.

3.2 Multi-scale Simulation of Turbulent Flows

The modern approach to the simulation of turbulent flows uses Euler, NS and Boltzmann models for different parts of the flows [22]. The Euler models are used for large-scale structures, whereas kinetic models are used for the small-scale stochastic background. The Boltzmann equation for the velocity distribution of gas particles is used for compressible gas flows. For liquids, there is no universal kinetic equation for the probability density of the instantaneous flow velocity; often, a kinetic equation for the turbulent background is used in a form resembling the Boltzmann transport equation for gases. The UFS can serve as a useful tool for first-principles simulations of turbulent gas flows. The macro-scale eddies can be described by the Euler or NS equations, with the Boltzmann solver used for micro-scale phenomena. For liquids, semi-empirical


kinetic equations of the Onufriev-Lundgren type could be used at the micro-scale. Molecular dynamics simulations can help justify and improve these equations for liquids. Additional research is needed to understand how to properly extend the UFS methodology to complex turbulent flows. The macro-scale coherent structures of turbulent flows can be well described by the Euler or NS equations. Fig. 5 shows an example of 2D UFS simulations (here the kinetic NS solver is used) of unstable phenomena appearing in the wake behind a prism at M=3, Kn=10−5 for an angle of attack of 3 degrees.

Fig. 5. Instantaneous Mach number (left).

(d) Combined volume visualization of the 256x256x32 MRI data of a canine head, with the embedded subset of the hexahedral finite element mesh of the segmented canine brain.

After the geometric model was obtained, geometric flow smoothed the geometric model and a geometric volumetric map was created using the signed distance function method. The hexahedral mesh was generated using an octree-based isocontouring method. Geometric flow [17], pillowing [7] and the optimization method were used to improve the mesh quality. The constructed hexahedral mesh has two important


properties: good aspect ratios, and at most one face lying on the boundary for each element. On the day of treatment, an FFT-based technique is used to register the finite element mesh to the current position of the patient. The registration software has been rigorously tested against a suite of validation experiments using phantom materials. The phantom materials are fabricated with two materials of contrasting image density, in which an inner smaller object is placed asymmetrically within the larger object. The materials are composed of 2% agar gel, and at least three 2 mm nylon beads are introduced as fiducials. The suite of data consists of several 3D images of incremental translational and rotational rigid body motions of the phantom material as well as images of incremental deformation of the phantom material. The data is provided for the image registration community from the DDDAS project webpage (dddas.ices.utexas.edu). The final image processing step is to overlay the MRTI thermal data onto the finite element mesh. A median and a Deriche filter are used to remove the inherent noise from the MRTI data (Figure 5). The filtered MRTI data is interpolated onto the finite element solution space. The order of interpolation is determined by the order of the mesh.
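A minimal sketch of this filtering-and-projection step, assuming the MRTI volume is stored as a NumPy array: a median filter plus a Gaussian smoother (used here as a stand-in for the Deriche filter, which SciPy does not provide) removes the noise, and the filtered volume is then interpolated linearly onto the finite element node coordinates, matching a linear mesh. Array shapes, voxel sizes and filter widths are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter
from scipy.interpolate import RegularGridInterpolator

def filter_and_project(mrti, voxel_size, nodes):
    """mrti: (nx, ny, nz) thermal image; voxel_size: (dx, dy, dz) in mm;
    nodes: (n_nodes, 3) mesh node coordinates in mm. Returns nodal temperatures."""
    # Denoise: median filter removes speckle, Gaussian smoothing stands in for the Deriche filter
    smooth = gaussian_filter(median_filter(mrti, size=3), sigma=1.0)
    axes = [np.arange(n) * d for n, d in zip(mrti.shape, voxel_size)]
    interp = RegularGridInterpolator(axes, smooth, bounds_error=False, fill_value=None)
    return interp(nodes)   # linear interpolation, matching a linear FE mesh

# Toy example: a 256x256x5 image stack and a handful of node positions
rng = np.random.default_rng(0)
mrti = 37.0 + rng.normal(0.0, 0.5, size=(256, 256, 5))
nodes = rng.uniform([0.0, 0.0, 0.0], [255.0, 255.0, 14.0], size=(10, 3))
print(filter_and_project(mrti, (1.0, 1.0, 3.5), nodes))
```

Higher-order meshes would call for a higher interpolation order, which is the dependence noted in the last sentence above.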

4 Calibration, Optimal Control, and Error Estimation

Pennes model [10] has been shown [5,16,14] to provide very accurate predictions of bioheat transfer and is used as the basis of the finite element prediction. The control paradigm involves three major problems: calibration of the Pennes bioheat transfer model to patient specific MRTI data, optimal positioning and power supply of the laser heat source, and computing goal oriented error estimates. During the laser treatment process, all three problems are solved in tandem by separate groups of processors communicating amongst each other as needed. The variational form of the governing Pennes bioheat transfer model is as follows: given a set of model parameters, β, and laser parameters, η,

\[ \text{find } u(x,t) \in V \equiv H^1\big([0,T];\,H^1(\Omega)\big) \ \text{ s.t. } \ \forall v \in V:\quad B(u,\beta;v) = F(\eta;v), \]

where the explicit functional dependence on the model parameters, β, and the laser parameters, η = (P(t), x0), is expressed as follows:

\[ B(u,\beta;v) = \int_0^T\!\!\int_\Omega \Big( \rho c_p \frac{\partial u}{\partial t}\,v + k(u,\beta)\,\nabla u \cdot \nabla v + \omega(u,\beta)\,c_{blood}\,(u-u_a)\,v \Big)\, dx\, dt + \int_0^T\!\!\int_{\partial\Omega_C} h\,u\,v\; dA\, dt + \int_\Omega u(x,0)\,v(x,0)\; dx \]

\[ F(\eta;v) = \int_0^T\!\!\int_\Omega 3P(t)\,\mu_a\mu_{tr}\, \frac{\exp(-\mu_{eff}\,\|x-x_0\|)}{4\pi\,\|x-x_0\|}\, v\; dx\, dt + \int_0^T\!\!\int_{\partial\Omega_C} h\,u_\infty\,v\; dA\, dt - \int_0^T\!\!\int_{\partial\Omega_N} G\,v\; dA\, dt + \int_\Omega u_0\,v(x,0)\; dx \]

\[ \mu_{eff} = \sqrt{3\,\mu_a\,\mu_{tr}}, \qquad \mu_{tr} = \mu_a + \mu_s\,(1-\gamma) \]

Here k [J/(s·m·K)] and ω [kg/(s·m³)] are bounded functions of u, c_p and c_blood are the specific heats, u_a is the arterial temperature, ρ is the density, and h is the coefficient of cooling. P is the laser power, μ_a and μ_s are laser coefficients related to the laser wavelength that give the probability of absorption of photons by tissue, γ is the anisotropy factor, and x_0 is the position of the laser photon source. Constitutive model data and details of the optimization process are given in [8,11].
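To make the roles of these terms concrete, here is a minimal 1-D explicit finite-difference sketch of the corresponding strong form, ρ c_p ∂u/∂t = k ∂²u/∂x² − ω c_blood (u − u_a) + Q_laser, with constant coefficients and the exponentially decaying laser source above. All parameter values are illustrative assumptions, and the actual system uses an adaptive hp-finite element discretization rather than this toy scheme.

```python
import numpy as np

# Illustrative tissue and laser parameters (not patient-calibrated values)
rho, cp = 1045.0, 3600.0                 # density [kg/m^3], specific heat [J/(kg K)]
k, omega, c_blood, u_a = 0.5, 6.0, 3840.0, 310.0   # conduction, perfusion, blood properties
mu_a, mu_s, gamma, P = 500.0, 14000.0, 0.88, 1.0   # absorption, scattering [1/m], anisotropy, power [W]
mu_tr = mu_a + mu_s * (1.0 - gamma)
mu_eff = np.sqrt(3.0 * mu_a * mu_tr)

L, n = 0.04, 200                         # 4 cm tissue segment
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
dt = 0.2 * rho * cp * dx**2 / k          # stable explicit time step
u = np.full(n, 310.0)                    # initial body temperature [K]
x0 = 0.01                                # laser fiber position [m]
r = np.maximum(np.abs(x - x0), 1e-3)     # clip at an assumed 1 mm fiber radius
q_laser = 3.0 * P * mu_a * mu_tr * np.exp(-mu_eff * r) / (4.0 * np.pi * r)

for _ in range(int(10.0 / dt)):          # simulate 10 s of heating
    lap = np.zeros(n)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (k * lap - omega * c_blood * (u - u_a) + q_laser) / (rho * cp)
    u[0] = u[-1] = 310.0                 # simple Dirichlet boundaries

print("peak temperature after 10 s: %.1f K" % u.max())
```

The perfusion term acts as a heat sink pulling the tissue back toward arterial temperature, while the laser term deposits energy in a narrow region around the fiber; calibration adjusts k(u, β) and ω(u, β) so that the predicted profiles match the MRTI measurements.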

5 Data Transfer, Visualization, and Current Results

Conventional data transfer methods and software rendering visualization tools pose a major bottleneck in developing a laser treatment paradigm in which high performance computers control the bioheat data transferred from a remote site. The data transfer problem is addressed through the use of client-server applications that use a remote procedure calling protocol to transfer data directly between physical memory, instead of incurring the overhead of writing to disk and transferring data. Volume Rover [1] is able to achieve high performance interactive visualization through the use of modern programmable graphics hardware to provide combined geometry and volume rendering displays (Figure 4); software rendering, in contrast, is limited by the memory and processor. The computational time used to advance the Pennes model equations forward in time is not a bottleneck. Computations are done at the Texas Advanced Computing Center on a dual-core Linux cluster. Each node of the cluster contains two Intel Xeon dual-core 64-bit processors (4 cores in all) on a single board, as an SMP unit. The core frequency is 2.66 GHz and supports 4 floating-point operations per clock period. Each node contains 8 GB of memory. The average execution time of a representative 10 second simulation is approximately 1 second, meaning that in a real-time 10 second span the Pennes model can predict out to more than a minute. Equivalently, in a 10 second time span, roughly 10 corrections can be made to calibrate the model coefficients or optimize the laser parameters. The typical time duration of a laser treatment is about five minutes. During a five minute span, one set of MRTI data is acquired every 6 seconds. The size of each set of MRTI data is ≈330 kB (256x256x5 voxels). Computations comparing the predictions of the Pennes model to experimental MRTI data taken from a canine brain show very good agreement (Figure 5). A manual craniotomy of a canine skull was performed to allow insertion of an interstitial laser fiber. A finite element mesh of the biological domain generated from the MRI data is shown in Figure 4. The mesh consists of 8820 linear elements with a total of
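The real-time budget quoted above can be checked with a few lines of arithmetic; all numbers are the ones stated in the text (about 1 s of wall-clock time per 10 s of simulated time, one ≈330 kB MRTI set every 6 s, a five minute treatment).

```python
sim_per_wall = 10.0 / 1.0            # seconds of simulated bioheat per second of compute
window = 10.0                        # a real-time 10 second span
print("look-ahead in a 10 s span: %.0f s of simulated time" % (window * sim_per_wall))
print("model corrections per 10 s span: %d" % int(window / 1.0))

treatment_s = 5 * 60                 # typical laser treatment duration [s]
mrti_sets = treatment_s // 6         # one MRTI set acquired every 6 seconds
print("MRTI sets per treatment: %d (about %.1f MB)" % (mrti_sets, mrti_sets * 330e3 / 1e6))
```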



Fig. 5. (a) Contours of Pennes model prediction overlayed onto the finite element mesh. (b),(c) Simultaneous cutline comparison of Pennes model prediction, Filtered MRTI data, and Unfiltered MRTI data. Cutline taken through laser source.

9872 degrees of freedom. MRTI thermal imaging data was acquired in the form of five two-dimensional 256x256 pixel images every six seconds for 120 time steps. The spacing between images was 3.5 mm. The MRTI data was filtered and then projected onto the finite element mesh. Figure 5 shows a cutline comparison between the MRTI data and the predictions of the Pennes model. It is observed that the computational Pennes model slightly over-diffuses the heat profile peaks compared to the measured values. However, at early times the maximum temperature value is within 5% of the MRTI value.

6 Conclusions

Results indicate that reliable finite element model simulations of hyperthermia treatments can be computed, visualized, and used to provide feedback in the same time span in which the actual therapy takes place. Combining these prediction capabilities with an understanding of HSP kinetics and damage mechanisms at the cellular and tissue levels due to thermal stress will provide a powerful methodology for planning and optimizing the delivery of hyperthermia therapy for cancer treatments. The entire closed control loop is currently being tested on agar and ex-vivo tissue samples in preparation for the first real-time computer guided laser therapy, which is anticipated within the upcoming year. The culmination of adaptive hp-finite element technology implemented on parallel computer architectures, modern data transfer and visualization infrastructure, thermal imaging modalities, and cellular damage mechanisms to provide a cancer treatment tool will be a significant achievement in the field of computational science. Acknowledgments. The research in this paper was supported in part by the National Science Foundation under grants CNS-0540033, IIS-0325550, and NIH Contracts P20RR0206475, GM074258. The authors also acknowledge the important support of DDDAS research by Dr. Frederica Darema of NSF.


References
1. C. Bajaj, Z. Yu, and M. Aue. Volumetric feature extraction and visualization of tomographic molecular imaging. Journal of Structural Biology, 144(1-2):132–143, October 2003.
2. S. Balay, W. D. Gropp, L. C. McInnes, and B. F. Smith. PETSc users manual. Technical Report ANL-95/11 - Revision 2.1.5, Argonne National Laboratory, 2003.
3. W. Jiang, M. Baker, Q. Wu, C. Bajaj, and W. Chiu. Applications of bilateral denoising filter in biological electron microscopy. Journal of Structural Biology, 144(1-2):114–122, 2003.
4. M. Kangasniemi et al. Dynamic gadolinium uptake in thermally treated canine brain tissue and experimental cerebral tumors. Invest. Radiol., 38(2):102–107, 2003.
5. J. Liu, L. Zhu, and L. Xu. Studies on the three-dimensional temperature transients in the canine prostate during transurethral microwave thermal therapy. J. Biomech. Eng., 122:372–378, 2000.
6. R. J. McNichols et al. MR thermometry-based feedback control of laser interstitial thermal therapy at 980 nm. Lasers Surg. Med., 34(1):48–55, 2004.
7. S. A. Mitchell and T. J. Tautges. Pillowing doublets: refining a mesh to ensure that faces share at most one edge. In Proc. 4th International Meshing Roundtable, pages 231–240, 1995.
8. J. T. Oden, K. R. Diller, C. Bajaj, J. C. Browne, J. Hazle, I. Babuška, J. Bass, L. Demkowicz, Y. Feng, D. Fuentes, S. Prudhomme, M. N. Rylander, R. J. Stafford, and Y. Zhang. Dynamic data-driven finite element models for laser treatment of prostate cancer. Num. Meth. PDE, accepted.
9. J. T. Oden and S. Prudhomme. Goal-oriented error estimation and adaptivity for the finite element method. Computers and Mathematics with Applications, 41(5–6):735–756, 2001.
10. H. H. Pennes. Analysis of tissue and arterial blood temperatures in the resting forearm. J. Appl. Physiol., 1:93–122, 1948.
11. M. N. Rylander, Y. Feng, J. Zhang, J. Bass, R. J. Stafford, J. Hazle, and K. Diller. Optimizing HSP expression in prostate cancer laser therapy through predictive computational models. J. Biomed. Optics, 11(4):041113, 2006.
12. R. Salomir et al. Hyperthermia by MR-guided focused ultrasound: accurate temperature control based on fast MRI and a physical model of local energy deposition and heat conduction. Magn. Reson. Med., 43(3):342–347, 2000.
13. K. Shinohara. Thermal ablation of prostate diseases: advantages and limitations. Int. J. Hyperthermia, 20(7):679–697, 2004.
14. J. W. Valvano et al. An isolated rat liver model for the evaluation of thermal techniques to measure perfusion. ASME J. Biomech. Eng., 106:187–191, 1984.
15. F. C. Vimeux et al. Real-time control of focused ultrasound heating based on rapid MR thermometry. Invest. Radiol., 34(3):190–193, 1999.
16. L. Xu, M. M. Chen, K. R. Holmes, and H. Arkin. The evaluation of the Pennes, the Chen-Holmes, and the Weinbaum-Jiji bioheat transfer models in the pig kidney vortex. ASME HTD, 189:15–21, 1991.
17. Y. Zhang, C. Bajaj, and G. Xu. Surface smoothing and quality improvement of quadrilateral/hexahedral meshes with geometric flow. In Proceedings of 14th International Meshing Roundtable, volume 2, pages 449–468, 2005.

Grid-Enabled Software Environment for Enhanced Dynamic Data-Driven Visualization and Navigation During Image-Guided Neurosurgery

Nikos Chrisochoides1, Andriy Fedorov1, Andriy Kot1, Neculai Archip2, Daniel Goldberg-Zimring2, Dan Kacher2, Stephen Whalen2, Ron Kikinis2, Ferenc Jolesz2, Olivier Clatz3, Simon K. Warfield3, Peter M. Black4, and Alexandra Golby4

1 College of William & Mary, Williamsburg, VA, USA
2 Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
3 Department of Radiology, Children's Hospital, Boston, MA, USA
4 Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA

Abstract. In this paper we present our experience with an Image Guided Neurosurgery Grid-enabled Software Environment (IGNS-GSE) which integrates real-time acquisition of intraoperative Magnetic Resonance Imaging (IMRI) with preoperative MRI, fMRI, and DT-MRI data. We describe our distributed implementation of a non-rigid image registration method which can be executed over the Grid. Previously, non-rigid registration algorithms which use landmark tracking across the entire brain volume were considered impractical because of their high computational demands. The IGNS-GSE, for the first time ever in clinical practice, alleviated this restriction. We show that we can compute and present enhanced MR images to neurosurgeons during tumor resection within minutes after IMRI acquisition. For the last 12 months this software system has been used routinely (on average once a month) for clinical studies at Brigham and Women's Hospital in Boston, MA. Based on the analysis of the registration results, we also present future directions which will take advantage of the vast resources of the Grid to improve the accuracy of the method in places of the brain where precision is critical for the neurosurgeons. Keywords: Grid, Data-Driven Visualization.

1 Introduction

Cancer is one of the top causes of death in the USA and around the world. Medical imaging, and Magnetic Resonance Imaging (MRI) in particular, provides great help in diagnosing the disease. In brain cancer cases, MRI provides extensive information which can help to locate the tumor and plan the resection strategy. However, deformation and shift of brain structures are unavoidable during open brain surgery, and this creates discrepancies with respect to the preoperative imaging during the operation. It is possible to detect the brain shift during the surgery; one of the means to do this is IMRI. IMRI provides sparse dynamic measurements, which can be used to align (register) the preoperative data accordingly. In this way, high-quality, multimodal preoperative imaging can be used during the surgery. However, registration is a computationally intensive task, and it cannot be initiated before the IMRI becomes available. The local computing resources available at a particular hospital may not allow this computation to be performed in time. The goal of our research is to use geographically distributed computing resources to expedite the completion of this computation. In the on-going collaboration between Brigham and Women's Hospital (BWH) in Boston, MA, and the College of William and Mary (CWM) in Williamsburg, VA, we are studying how widely available commodity clusters and Grid resources can facilitate the timely delivery of registration results. We leverage our work from the state-of-the-art registration method [1] and extensive experience with distributed processing and dynamic load balancing [2,3]. We have designed a robust distributed implementation of the registration method which meets requirements concerning (1) execution speed, (2) reliability, (3) ease-of-use, and (4) portability of the registration code. We evaluate this prototype implementation in a geographically distributed environment, and outline how IGNS computations can benefit from large-scale computing resources like TeraGrid.

(This research was supported in part by NSF NGS-0203974, NSF ACI-0312980, NSF ITR-0426558, and NSF EIA-9972853.)

2 Near-Real-Time Non-rigid Registration

2.1 Registration Algorithm

The registration method was first presented in [1], and subsequently evaluated in [4]. The computation consists of preoperative and intraoperative components. Intraoperative processing starts with the acquisition of the first intraoperative scan. However, the time-critical part of the intraoperative computation is initiated when a scan showing shift of the brain is available. The high-level timeline of this process is shown in Fig. 1. Here we briefly describe the three main steps of the algorithm:

1. The patient-specific tetrahedral mesh model is generated from the segmented intra-cranial cavity (ICC). The ICC segmentation [4] is prepared based on pre-operative imaging data. As the first intraoperative scan becomes available, the preoperative data is rigidly (i.e., using translation and rotation) aligned with the patient's head position.
2. With the acquisition of the intraoperative image showing brain deformation, a sparse displacement field is estimated from the intra-operative scan using block matching [1]. Its computation is based on maximizing the correlation coefficient between regions, or blocks, of the pre-operative (aka floating) image and the real-time intra-operative (aka fixed) image.
3. The FEM model of the intra-cranial cavity with a linear elastic constitutive equation is initialized with the mesh and the sparse displacement field as the initial condition. An iterative hybrid method is used to discard the outlier matches.

Steps 2 and 3 are time critical and should be performed as the surgeons are waiting. In the context of the application we define the response time as the time between the acquisition of the intra-operative scan of the deformed tissue and the final visualization of the registered preoperative data on the console in the operating room. These steps, performed intraoperatively, form the Dynamic Data-Driven Application System (DDDAS) steered by the IMRI-acquired data. Our broad objective is to minimize the perceived (end-to-end) response time of this DDDAS component.
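A minimal sketch of the block-matching kernel in step 2: for a block of the floating (pre-operative) image, search a window in the fixed (intra-operative) image for the displacement with the best normalized correlation coefficient. The block and window sizes are illustrative assumptions, and the production code distributes roughly 100,000 such blocks across many processors.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(floating, fixed, center, block=5, window=7):
    """Displacement of the block around `center` that maximizes the NCC
    inside a (2*window+1)^2 search region of the fixed image."""
    ci, cj = center
    ref = floating[ci - block:ci + block + 1, cj - block:cj + block + 1]
    best, disp = -2.0, (0, 0)
    for di in range(-window, window + 1):
        for dj in range(-window, window + 1):
            i, j = ci + di, cj + dj
            cand = fixed[i - block:i + block + 1, j - block:j + block + 1]
            if cand.shape != ref.shape:
                continue                    # skip candidate blocks falling outside the image
            score = ncc(ref, cand)
            if score > best:
                best, disp = score, (di, dj)
    return disp, best

# Toy example: the fixed image is the floating image shifted by (2, -1)
rng = np.random.default_rng(1)
floating = rng.random((64, 64))
fixed = np.roll(floating, shift=(2, -1), axis=(0, 1))
print(best_match(floating, fixed, center=(32, 32)))   # expected ((2, -1), ~1.0)
```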

2.2 Implementation Objectives

We have completed an evaluation of the initial PVM implementation [1] of the described registration approach. The evaluation was done on retrospective datasets obtained during past image-guided neurosurgeries. We identified the following problems:

1. The execution time of the original non-rigid registration code is data-dependent and varies between 30 and 50 minutes when computed on a high-end 4-CPU workstation. The scalability of the code is very poor due to work-load imbalances.
2. The code is designed to run as a single monolithic step (since it was not evaluated in the intraoperative mode), and a single failure at any point requires restarting the registration from the beginning.
3. The original code is not intuitive to use: a number of implementation-specific parameters must be set on the command line. This makes it cumbersome and error-prone to use during neurosurgery. The possible critical delays are exacerbated in the case where the code has to run remotely on larger Clusters of Workstations (CoWs).
4. The original code is implemented in PVM, which is not supported by many sites due to the widespread use of the MPI standard for message passing.

Based on the evaluation of the original code, the following implementation objectives were identified:

High-performance. Develop an efficient and portable software environment for a parallel and distributed implementation of the real-time non-rigid registration method for both small scale parallel machines and large scale geographically distributed CoWs. The implementation should be able to work on both dedicated and time-shared resources.

The notion of DDDAS was first coined and advocated by Dr.Darema, see http://dddas.org.


Fig. 1. Timeline of the image processing steps during IGNS (the client is running at BWH, and the server is using multiple clusters at CWM, for fault-tolerance purposes)

Quality-of-service (QoS). Provide functionality not only to sustain failures but also to dynamically replace/reallocate faulty resources with new ones during the real-time data acquisition and computation.

Ease-of-use. Develop a GUI which will automatically handle exceptions (e.g., faults, resource management, and network outages).

We have developed an implementation which addresses the aforementioned objectives [5]. Next we briefly highlight some of the implementation details.

2.3 Implementation Details

Multi-level Distributed Block Matching. In order to find a match for a given block, we need the block center coordinates, and the areas of the fixed and floating images bounded by the block matching window [1]. The fixed and floating images are loaded on each of the processors during the initialization step, as shown in Fig. 1. The total workload is maintained in a work-pool data structure.


Each item of the work-pool contains the three coordinates of the block center (the total number of blocks for a typical dataset is around 100,000) and the best match found for that block (in case the block has been processed; otherwise that field is empty). We use the master-worker computational model to distribute the work among the processors. However, because of the scarce resource availability we have to be able to deal with computational clusters which belong to different administrative domains. In order to handle this scenario, we use a hierarchical multi-level organization of the computation with the master-worker model. We use a separate master node within each cluster. Each master maintains a replica of the global work-pool, and is responsible for distributing the work according to the requests of the nodes within the assigned cluster and for communicating the execution progress to the other master(s).

Multi-level Dynamic Load Balancing. The imbalance of the processing time across the different nodes involved in the computation is caused by our inability or difficulty to predict the time required per block of data on a given architecture. The main sources of load imbalance are platform-dependent: they are caused by the heterogeneous nature of the PEs we use. More importantly, some of the resources may be time-shared by multiple users and applications, which affects the processing time in an unpredictable manner. A (weighted) static work assignment of any kind is not effective when some of the resources operate in time-shared mode. We have implemented a multi-level hierarchical dynamic load balancing scheme for parallel block matching. We use an initial rough estimation of the combined computational power of each cluster involved in the computation (based on CPU clock speed) for the weighted partitioning of the work-pool and the initial assignment of work. However, this is a rough "guess" estimate, which is adjusted at runtime using a combination of master/worker and work-stealing [6,7] methods. Each master has a copy of the global work-pool; these copies are identical at the beginning of the computation. The portion of the work-pool assigned to a specific cluster is partitioned into meta-blocks (sequences of blocks), which are passed to the cluster nodes using the master-worker model. As soon as all the matches for a meta-block are computed, they are communicated back to the master, and a new meta-block is requested. In case the portion of the work-pool assigned to a master has been processed, the master continues with the "remote" portions of work (i.e., those initially assigned to other clusters). As soon as the processing of a "remote" meta-block is complete, it is communicated to all the other master nodes to prevent duplicated computation.

Multi-level Fault Tolerance. Our implementation is completely decoupled, which provides the first level of fault tolerance: if a failure takes place at any of the stages, we can seamlessly restart just the failed phase of the algorithm and recover the computation. The second level of fault tolerance concerns the parallel block matching phase. It is well known that the vulnerability of parallel computations to hardware failures increases as we scale the size of the system.


We would like to have a robust system which, in case of failure, is able to continue the parallel block matching without recomputing results obtained before the failure. This functionality is greatly facilitated by the previously described work-pool data structure, which is maintained by the master nodes. The work-pool data structure is replicated on the separate file-systems of these clusters, and has a tuple for each of the block centers. A tuple can be either empty, if the corresponding block has not been processed, or otherwise it contains the three components of the best match for that block. The work-pool is synchronized periodically between the two clusters, and within each cluster it is updated by the PEs involved. As long as one of the clusters involved in the computation remains operational, we are able to sustain the failure of the other computational side and deliver the registration result.

Ease-of-Use. The implementation consists of client and server components. The client runs at the hospital site and is based on a Web service, which makes it highly portable and easy to deploy. On the server side, the input data and arguments are transferred to the participating sites. Currently, we have a single server responsible for this task. The computation proceeds using the available participating remote sites to provide the necessary performance and fault-tolerance.

Table 1. Execution time (sec) of the intra-surgery part of the implemented web-service at various stages of development

Setup (dataset ID)                                              1     2     3     4     5     6      7
High-end workstation, original PVM implementation            1558  1850  2090  2882  2317  2302   3130
SciClone (240 procs), no load-balancing                        745   639   595   617   570   550.4  1153
SciClone (240 procs) and CS lab (29 procs), dynamic 2-level
load-balancing and fault-tolerance                              30    40    42    37    34    33     35
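A single-process sketch of the scheduling idea behind the two-level scheme just described: the work-pool of block centers is chopped into meta-blocks, two "cluster" masters receive contiguous shares weighted by an estimated speed, each master's workers pull meta-blocks on demand, and an idle side steals from the other share. Thread-safe queues stand in for the PVM/MPI messaging of the real system; the sizes, speeds and the squared-index "match" are illustrative placeholders.

```python
import queue, threading

def make_work_pool(n_blocks=1000, meta=50):
    """Work-pool of block indices, chopped into meta-blocks (lists of indices)."""
    blocks = list(range(n_blocks))
    return [blocks[i:i + meta] for i in range(0, len(blocks), meta)]

def run_two_level(metablocks, speeds=(3.0, 1.0), workers_per_cluster=(4, 2)):
    # Initial weighted split of the meta-blocks between the two cluster masters
    cut = int(len(metablocks) * speeds[0] / sum(speeds))
    shares = [queue.Queue(), queue.Queue()]
    for mb in metablocks[:cut]:
        shares[0].put(mb)
    for mb in metablocks[cut:]:
        shares[1].put(mb)

    results, lock = {}, threading.Lock()
    done_by = [0, 0]

    def worker(cluster):
        while True:
            try:                       # pull from the local master's share first ...
                mb = shares[cluster].get_nowait()
            except queue.Empty:
                try:                   # ... then steal from the other cluster's share
                    mb = shares[1 - cluster].get_nowait()
                except queue.Empty:
                    return
            matches = {b: b * b for b in mb}   # placeholder for block matching
            with lock:
                results.update(matches)
                done_by[cluster] += len(mb)

    threads = [threading.Thread(target=worker, args=(c,))
               for c in (0, 1) for _ in range(workers_per_cluster[c])]
    for t in threads: t.start()
    for t in threads: t.join()
    return results, done_by

results, done_by = run_two_level(make_work_pool())
print("blocks matched:", len(results), "split between clusters:", done_by)
```

Replicating the result dictionary on both sides, as the real system replicates the work-pool on separate file-systems, is what allows either cluster alone to finish the registration if the other fails.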

3 Initial Evaluation Results

Our preliminary results use seven image datasets acquired at BWH. Two of these seven registration computations were accomplished during the course of surgery (at the College of William and Mary), while the rest of the computations were done retrospectively. All of the intra-operative computations utilized SciClone (a heterogeneous cluster of workstations located at CWM, reserved in advance for the registration computation) and the workstations of the student lab (in time-shared mode). The details of the hardware configuration can be found in [5]. Data transfer between the networks of CWM


Fig. 2. Registration accuracy dependence on proper parameter selection (left). Excellent scalability of the code on the NCSA site of TeraGrid (right) enables intraoperative search for optimal parameters.

and BWH (a subnet of Harvard University) is facilitated by the Internet2 backbone network, with the slowest link having a bandwidth of 2.5 Gbps. The initial evaluation results are summarized in Table 1. We were able to reduce the total response time to 2 minutes (4 minutes, including the time to transfer the data). We showed that dynamic load balancing is highly effective in a time-shared environment. The modular structure of the implemented code greatly assisted the overall usability and reliability of the code. The fault-tolerance mechanisms implemented are absolutely essential and introduce a mere 5-10% increase in the execution time. We have also evaluated our implementation on the Mercury nodes of the NCSA TeraGrid site [8]. The 64-bit homogeneous platform available at NCSA allows for high sustained computational power and improved scalability of the code (see Fig. 2).

4 Discussion

The registration algorithm we implemented has a number of parameters whose values can potentially affect the accuracy of the results. The evaluation of all parameters is computationally demanding (the parameter space of the algorithm has high dimensionality), and requires vast computational resources that are available only over the Grid. Based on the preliminary analysis, registration accuracy depends on the parameter selection. Fig. 2 shows the spread of registration precision at expert-identified anatomical landmarks. Given the distributed resources available within TeraGrid, we should be able to compute registration results that use different parameter settings in parallel. As the multiple registrations become available, the surgeon will specify the area of interest within the brain, and the registration image which gives the best effective accuracy in that particular region will be selected. The implemented framework proved to be very effective during the on-going clinical study on non-rigid registration. However, more work needs to be done to make the framework portable and easy to deploy on an arbitrary platform. Once this


is complete, the registration can be provided as a ubiquitously-available Web service. The concerns about resource allocation and scheduling on a shared resource like TeraGrid are of high importance. The presented research utilized time-shared resources together with a large cluster operating in dedicated mode. However, we are currently investigating other opportunities, e.g., SPRUCE and urgent computing [9] on TeraGrid . Acknowledgments. This work was performed in part using computational facilities at the College of William and Mary which were enabled by grants from Sun Microsystems, the National Science Foundation, and Virginia’s Commonwealth Technology Research Fund. We thank SciClone administrator Tom Crockett for his continuous support and personal attention to this project. We acknowledge support from a research grant from CIMIT, grant RG 3478A2/2 from the NMSS, and by NIH grants R21 MH067054, R01 RR021885, P41 RR013218, U41 RR019703, R03 EB006515 and P01 CA067165.

References
1. Clatz, O., Delingette, H., Talos, I.F., Golby, A., Kikinis, R., Jolesz, F., Ayache, N., Warfield, S.K.: Robust non-rigid registration to capture brain shift from intraoperative MRI. IEEE Trans. Med. Imag. 24(11) (2005) 1417–1427
2. Barker, K., Chernikov, A., Chrisochoides, N., Pingali, K.: A load balancing framework for adaptive and asynchronous applications. IEEE TPDS 15(2) (February 2004) 183–192
3. Fedorov, A., Chrisochoides, N.: Location management in object-based distributed computing. In: Proc. of IEEE Cluster'04. (2004) 299–308
4. Archip, N., Clatz, O., Whalen, S., Kacher, D., Fedorov, A., Kot, A., Chrisochoides, N., Jolesz, F., Golby, A., Black, P.M., Warfield, S.K.: Non-rigid alignment of preoperative MRI, fMRI, and DT-MRI with intra-operative MRI for enhanced visualization and navigation in image-guided neurosurgery. NeuroImage (2007) (in press)
5. Chrisochoides, N., Fedorov, A., Kot, A., Archip, N., Black, P., Clatz, O., Golby, A., Kikinis, R., Warfield, S.K.: Toward real-time image guided neurosurgery using distributed and Grid computing. In: Proc. of IEEE/ACM SC06. (2006)
6. Blumofe, R., Joerg, C., Kuszmaul, B., Leiserson, C., Randall, K., Zhou, Y.: Cilk: An efficient multithreaded runtime system. In: Proceedings of the 5th Symposium on Principles and Practice of Parallel Programming. (1995) 55–69
7. Wu, I.: Multilist Scheduling: A New Parallel Programming Model. PhD thesis, School of Comp. Sci., Carnegie Mellon University, Pittsburgh, PA 15213 (July 1993)
8. TeraGrid Project: TeraGrid home page (2006) http://teragrid.org/, accessed 23 April 2006
9. Beckman, P., Nadella, S., Trebon, N., Beschastnikh, I.: SPRUCE: A system for supporting urgent high-performance computing. In: Proc. of WoCo9: Grid-based Problem Solving Environments. (2006)

From Data Reverence to Data Relevance: Model-Mediated Wireless Sensing of the Physical Environment

Paul G. Flikkema1, Pankaj K. Agarwal2, James S. Clark2, Carla Ellis2, Alan Gelfand2, Kamesh Munagala2, and Jun Yang2

1 Northern Arizona University, Flagstaff, AZ 86001, USA
2 Duke University, Durham, NC, USA

Abstract. Wireless sensor networks can be viewed as the integration of three subsystems: a low-impact in situ data acquisition and collection system, a system for inference of process models from observed data and a priori information, and a system that controls the observation and collection. Each of these systems is connected by feedforward and feedback signals from the others; moreover, each subsystem is formed from behavioral components that are distributed among the sensors and out-of-network computational resources. Crucially, the overall performance of the system is constrained by the costs of energy, time, and computational complexity. We are addressing these design issues in the context of monitoring forest environments with the objective of inferring ecosystem process models. We describe here our framework of treating data and models jointly, and its application to soil moisture processes. Keywords: Data Reverence, Data Relevance, Wireless Sensing.

1 Introduction

All empirical science is based on measurements. We become familiar with these quantitative observations from an early age, and one indication of our comfort level with them is the catchphrase “ground truth”. Yet one characteristic of the leading edge of discovery is the poor or unknown quality of measurements, since the instrumentation technology and the science progress simultaneously, taking turns pulling each other forward in incremental steps. Wireless sensor networking is a new instrument technology for monitoring of a vast range of environmental and ecological variables, and is a particularly appropriate example of the interleaving of experiment and theory. There are major ecological research questions that must be treated across diverse scales of space and time, including the understanding of biodiversity and the effects on it of human activity, the dynamics of invasive species (Tilman 2003), and identification of the web of feedbacks between ecosystems and global climate change. Wireless sensor networks have great potential to provide the data to help answer these questions, but they are a new type of instrumentation with


substantial constraints: the usual problems of transducer noise, nonlinearities, calibration, and sensitivity to temperature and aging are compounded by numerous potential sensor and network failure modes and intrinsically unreliable multihop data transmissions. Moreover, the entire measurement and networking enterprise is severely constrained by limited power and energy. There is substantial redundancy in data collected within wireless networks (Fig. 1). Yet the capacity to collect dense data when it provides valuable information is one of the key motivations for the technology. Clearly, there is a need to control the measurement process with model-based evaluation of potential observations.

Fig. 1. Examples of four variables measured in the Duke Forest wireless sensor network showing different levels of redundancy at different scales

Measurements without underlying data and process models are of limited use in this endeavor. Indeed, most useful measurements, even when noiseless and unbiased, are still based on an underlying model, as in processing for sensors and satellite imagery (Clark et al. 2007). These often-implicit or even forgotten


models have limitations that are bound up in their associated data. For example, a fundamental operation in environmental monitoring is sampling in space and time, and one approach is to estimate temporal and spatial bandwidths to establish sampling rates. However, the usual frame of reference in this case is the classic Shannon sampling theorem, and the requirement of finite bandwidth in turn forces definition of the signal over all time, a clear model-based limitation. The phenomena of interest in environmental monitoring are highly time-varying and non-stationary, and laced with measurement and model uncertainty. These factors are the key motivation for the application of the Dynamic Data Driven Applications Systems (DDDAS) paradigm to wireless sensor networks (Flikkema et al. 2006). DDDA systems are characterized by the coupling of the concurrent processes of data collection and model inference, with feedbacks from one used to refine the other. The fact that resources, both energetic and economic, are limited in wireless sensor networks is in some sense an opportunity. Rather than accept measurements as the gold standard, we should embrace the fact that both measurements and models can be rife with uncertainty, and then tackle the challenge of tracking and managing that uncertainty through all phases of the project: transducer and network design; data acquisition, transfer, and storage; model inference; and analysis and interpretation.

2 Dynamic Control of Network Activity

Looking at two extreme cases of data models (strong spatial correlation combined with weak temporal correlation, and vice versa) can shed some light on the trade-offs in designing algorithms that steer network activity. First, consider the case when the monitored process is temporally white but spatially coherent. This could be due to an abrupt global (network-wide) change, such as the onset of a rainstorm in the monitoring of soil moisture. In this case, we need snapshots at the natural temporal sampling rate, but only from a few sensor nodes. Data of the needed fidelity can then be obtained using decentralized protocols, such as randomized protocols that are simple and robust (Flikkema 2006). Here, the fusion center or central server broadcasts a cue to the nodes in terms of activity probabilities. The polar opposite is when there is strong temporal coherence but the measurements are statistically independent in the spatial domain. One example of this is sunfleck processes in a forest stand with varying canopy density. Since most sensor nodes should report their measurements, but infrequently, localized temporal coding schemes can work well. Our overall effort goes beyond data models to the steering of network activity driven by ecosystem process models, motivated by the fact that even though a measured process may have intrinsically strong dynamics (or high bandwidth), it may be driving an ecosystem process that is a low-pass filter, so that the original data stream is strongly redundant with respect to the model of interest. Our approach is to move toward higher-level modeling that reveals what data is important.
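A minimal sketch of the randomized, decentralized protocol for the spatially coherent case: the fusion center broadcasts an activity probability, and at each sampling instant every node flips its own coin to decide whether to sample and report. The probability, network size and noise level are illustrative assumptions.

```python
import random

def epoch(n_nodes, p_active, sense):
    """One sampling epoch: each node independently reports with probability p_active."""
    reports = {}
    for node in range(n_nodes):
        if random.random() < p_active:   # local coin flip, no coordination required
            reports[node] = sense(node)
    return reports

def fusion_estimate(reports):
    """For a spatially coherent field, a handful of reports gives a global estimate."""
    return sum(reports.values()) / len(reports) if reports else None

random.seed(0)
true_field = 0.31                        # e.g., network-wide soil moisture after rain onset
sense = lambda node: true_field + random.gauss(0.0, 0.01)
reports = epoch(n_nodes=100, p_active=0.05, sense=sense)   # broadcast cue: ~5 active nodes expected
print(len(reports), "reports, estimate:", fusion_estimate(reports))
```

Raising or lowering the broadcast probability lets the fusion center trade reporting energy against estimation fidelity without any per-node scheduling.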


A common criticism might arise here: what if the model is wrong? First, given the imprecision and unreliability of data, there is no a priori reason to favor data. For example, we often reject outliers in data preprocessing, which relies on an implicit "reasonableness" model. Yet an outlier could be vital information. Thus any scheme must dynamically allocate confidence between the model and the incoming data. By using a Bayesian approach, this allocation can be made in a principled, quantitative manner (Clark 2005, Clark 2007, MacKay 2003). Any algorithm that uses model-steered sampling and reporting (rather than resorting to fixed-rate sampling at some maximum rate) will make errors with a non-zero probability. To mitigate these errors, our strategy is based on concurrent execution of the same models in the fusion center as in the individual sensors. Using this knowledge, the fusion center can estimate unreported measurements; the reliability of these estimates is determined by the allowed departure of the predicted value from the true value known by the sensing node. The fusion center can also run more sophisticated simulation-based models that would be infeasible in the resource-constrained sensors, and use the results to broadcast model-parameter updates. Clearly, a missing report could be due to a communication or processing error rather than a decision by the sensor. By carefully controlling redundancy within the Bayesian inference framework, which incorporates models for both dynamic reporting and failure statistics (Silberstein et al. 2007), it becomes possible to infer not only data and process models, but node- and network-level failure modes as well. Finally, in our experiments, each sensor node archives its locally acquired data in non-volatile memory, allowing collection of reference data sets for analysis.

3 Example: Soil Moisture Processes

Soil moisture is a critical ecosystem variable since it places a limit on the rate of photosynthesis and hence plant growth. It is a process parameterized by soil type and local topography and driven by precipitation, surface runoff, evapotranspiration, and subsurface drainage processes. Because it is highly non-linear, it is much more accessible to Bayesian approaches than to ad hoc inverse-modeling techniques. Bayesian techniques permit integration of process noise that characterizes our level of confidence in the model. In practice, it may be more productive to use a simple model with fewer state variables and process noise instead of a model of higher dimension with poorly known sensitivity to parameter variations. Once the model is obtained (for example, using training data either from archival data or a "shake-out" interval in the field), the inferred parameters can then be distributed to the sensor nodes. The nodes then use the model as a predictor for new measurements based on the past. An observation is transmitted only when the discrepancy between the measurement and the predicted value exceeds a threshold (again known to both the sensor and the fusion center). Finally, the model(s) at the fusion center are used to recover the unreported changes.
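The following Python sketch illustrates the threshold-based reporting rule under the simplifying assumption that the shared predictor is a one-step decay model; the trace, decay rate, and threshold are hypothetical stand-ins for the fitted soil-moisture model.

def run_node(measurements, predict, threshold):
    """Report a measurement only when it departs from the shared model's prediction
    by more than `threshold`; both the node and the fusion center run `predict`."""
    reports = [(0, measurements[0])]    # the first observation is always sent
    reference = measurements[0]         # value the node and fusion center agree on
    for t, y in enumerate(measurements[1:], start=1):
        y_hat = predict(reference)      # both sides run the same predictor
        if abs(y - y_hat) > threshold:  # discrepancy too large: transmit the sample
            reports.append((t, y))
            reference = y
        else:                           # fusion center substitutes its own prediction
            reference = y_hat
    return reports

# Toy drying-soil trace with a rain pulse at t = 5; only t = 0 and t = 5 get reported.
trace = [0.40, 0.38, 0.36, 0.35, 0.33, 0.48, 0.46, 0.44]
print(run_node(trace, predict=lambda x: 0.95 * x, threshold=0.02))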


Fig. 2. a) Simulated soil moisture data (solid line) and simulated observations (colored lines) from sensors that drift. b) Posterior estimates of parameter drift become part of the model that is used to determine which observations to collect (Fig. 3).

In the simulation results shown in Figure 2 (Clark et al. 2007), the underlying soil moisture is shown as a solid black line. Here we use a purely temporal model in each sensor node. Five sensors are shown in different colors, with calibration data as red dots. To emphasize that the approach does not depend on a flawless network, we assume that all sensors are down for a period of time. The 95% predictive intervals for soil moisture (dashed lines) show that, despite sensor drift and even complete network failure, soil moisture can be accurately predicted. For this particular example, the estimates of drift parameters are somewhat biased (Figure 2b), but these parameters are of limited interest, and have limited impact on predictive capacity (Figure 2a). The impact on reporting rate and associated energy usage is substantial as well (Clark et al. 2007). Our strategy is to incorporate dynamic reporting starting with simple, local models in an existing wireless sensor network architecture (Yang 2005). As shown by the soil moisture example, even purely temporal models can have a significant impact. From a research standpoint, it will be useful to first determine the effectiveness of dynamic reporting driven by a local change-based model, where a node reports an observation only if it has changed from the previously reported observation by a specified absolute amount. This is simple to implement and requires a negligible increase in processing time and energy.


Fig. 3. A simple process model, with field capacity and wilting point, together with a data model that accommodates parameter drift (Fig. 2b), allows for transmission of only a fraction of the data (solid dots in (a)). Most of the measurements are suppressed (b) because they can be predicted.

In general, local models have the advantage of not relying on collaboration with other sensor nodes and the associated energy cost of communication. What about the problem of applying one model for data collection and another for modeling those data in the future? It is important to select models for data collection that emphasize data predictability, rather than specific parameters to be estimated. For example, wilting point and field capacity are factors that make soil moisture highly predictable, their effects being evident in Figures 1 and 2. By combining a process model that includes just a few parameters describing the effects of field capacity and wilting point with a data model that includes sensor error, the full time series can be reconstructed from a relatively small number of observations (Figure 3) (Clark et al. 2007).

4 Looking Ahead

Researchers tend to make an observation, find the most likely value, and then treat it as deterministic in all subsequent work, with uncertainty captured only in process modeling. We have tried to make the case here for a more holistic approach that captures uncertainty in both data and models, and uses a framework to monitor and manage that uncertainty. As wireless sensor network deployments become larger and more numerous, researchers in ecology and the environmental sciences will become inundated with massive, unwieldy datasets filled with numerous flaws and artifacts. Our belief is that much of this data may be redundant, and that many of the blemishes may be irrelevant from the perspective of inferring predictive models of complex, multidimensional ecosystem processes. Since these datasets will consume a great deal of time and effort to document, characterize, and manage, we think that the time for model-mediated sensing has arrived.

References

1. NEON: Addressing the Nation's Environmental Challenges. Committee on the National Ecological Observatory Network (G. David Tilman, Chair), National Research Council. 2002. ISBN: 0-309-09078-4.
2. Clark, J.S. Why environmental scientists are becoming Bayesians. Ecol. Lett. 8:2-14, 2005.
3. Clark, J.S. Models for Ecological Data: An Introduction. Princeton University Press, 2007.
4. Clark, J.S., Agarwal, P., Bell, D., Ellis, C., Flikkema, P., Gelfand, A., Katul, G., Munagala, K., Puggioni, G., Silberstein, A., and Yang, J. Getting what we need from wireless sensor networks: a role for inferential ecosystem models. 2007 (in preparation).
5. Flikkema, P. The precision and energetic cost of snapshot estimates in wireless sensor networks. Proc. IEEE Symposium on Computing and Communications (ISCC 2006), Pula-Cagliari, Italy, June 2006.
6. Flikkema, P., Agarwal, P., Clark, J.S., Ellis, C., Gelfand, A., Munagala, K., and Yang, J. Model-driven dynamic control of embedded wireless sensor networks. Workshop on Dynamic Data Driven Application Systems, International Conference on Computational Science (ICCS 2006), Reading, UK, May 2006.
7. MacKay, D.J.C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
8. Silberstein, A., Braynard, R., Filpus, G., Puggioni, G., Gelfand, A., Munagala, K., and Yang, J. Data-driven processing in sensor networks. Proc. 3rd Biennial Conference on Innovative Data Systems Research (CIDR '07), Asilomar, California, USA, January 2007.
9. Yang, Z., et al. WiSARDNet: A system solution for high performance in situ environmental monitoring. Second International Workshop on Networked Sensor Systems (INSS 2005), San Diego, 2005.

AMBROSia: An Autonomous Model-Based Reactive Observing System David Caron, Abhimanyu Das, Amit Dhariwal, Leana Golubchik, Ramesh Govindan, David Kempe, Carl Oberg, Abhishek Sharma, Beth Stauffer, Gaurav Sukhatme, and Bin Zhang University of Southern California, Los Angeles, CA 90089  [email protected]

Abstract. Observing systems facilitate scientific studies by instrumenting the real world and collecting corresponding measurements, with the aim of detecting and tracking phenomena of interest. Our AMBROSia project focuses on a class of observing systems which are embedded into the environment, consist of stationary and mobile sensors, and react to collected observations by reconfiguring the system and adapting which observations are collected next. In this paper, we report on recent research directions and corresponding results in the context of AMBROSia.

1 Introduction

Observing systems facilitate scientific studies by instrumenting the real world and collecting measurements, with the aim of detecting and tracking phenomena of interest. Our work focuses on Reactive Observing Systems (ROS), i.e., those that (1) are embedded into the environment, (2) consist of stationary and mobile sensors, and (3) react to collected observations by reconfiguring the system and adapting which observations are collected next. The goal of ROS is to help scientists verify or falsify hypotheses with useful samples taken by the stationary and mobile units, as well as to analyze data autonomously to discover interesting trends or alarming conditions. We explore ROS in the context of a marine biology application, where the system monitors, e.g., water temperature and light as well as concentrations of micro-organisms and algae in a body of water. Current technology (and its realistic near-future prediction) precludes sampling all possibly relevant data: bandwidth limitations between stationary sensors make it impossible to collect all sensed data, and time & storage capacity constraints for mobile entities curtail the number and locations of samples they can take. To make good use of limited resources, we are developing a framework capable of optimizing and controlling the set of samples to be taken at any given time, taking into consideration the



This research has been funded by the NSF DDDAS 0540420 grant. It has also been funded in part by the NSF Center for Embedded Networked Sensing Cooperative Agreement CCR0120778, the National Oceanic and Atmospheric Administration Grant NA05NOS47812228, and the NSF EIA-0121141 grant. Contact author.


application’s objectives and system resource constraints. We refer to this framework as AMBROSia (Autonomous Model-Based Reactive Observing System). In [7] we give an overview of the AMBROSia framework as well as the experimental system setting and the corresponding marine biology application. In this paper we report on our recent research directions and corresponding results, in the context of AMBROSia. As already noted, one of the core functionalities of AMBROSia is the selection of samples which (1) can be retrieved at reasonably low energy cost, and (2) yield as much information as possible about the system. The second property in particular will change dynamically: in reaction to past measurements, different observations may be more or less useful in the future. At any given time, the system must select the most informative samples to retrieve based on the model at that point. We briefly outline a mathematical formulation of this problem and results to date in Section 2. In ROS accurate measurements are useful to scientists seeking a better understanding of the environment. However, it may not be feasible to move the static sensor nodes after deployment. In such cases, mobile robots could be used to augment the static sensor network, hence forming a robotic sensor network. In such networks, an important question to ask is how to coordinate the mobile robots and the static nodes such that estimation errors are minimized. Our recent efforts on addressing this question are briefly outlined in Section 3. The successful use of ROS, as envisioned in AMBROSia, partly depends on the system’s ability to ensure the collected data’s quality. However, various sensor network measurement studies have reported transient faults in sensor readings. Thus, another important goal in AMBROSia is automated high-confidence fault detection, classification, and data rectification. As a first step towards that goal, we explore and characterize several qualitatively different classes of fault detection methods, which are briefly outlined in Section 4. Our concluding remarks are given in Section 5.

2 A Mathematical Formulation of Sample Selection

Mathematically, our sample selection problem can be modeled naturally as a subset selection problem for regression: based on the small number of measurements X_i taken, a random variable Z (such as average temperature, chlorophyll concentration, growth of algae, etc.) is to be estimated as accurately as possible. Different measurements X_i, X_j may be partially correlated, and thus partially redundant, a fact that should be deduced from past models. In a pristine and abstract form, the problem can thus be modeled as follows: We are given a covariance matrix C between the random variables X_i, and a vector b describing covariances between measurements X_i and the quantity Z to be predicted (C and b are estimated based on the model). In order to keep the energy sampling cost small, the goal is to find a small set S (of size at most k) so as to minimize the mean squared prediction error [4,8]

    Err(Z, S) := E[(Z − Σ_{i∈S} α_i X_i)^2],

where the α_i are the optimal regression coefficients specifically for the set S selected. The selection problem thus gives rise to the well-known subset selection problem for regression [10], which has traditionally had many applications in medical and social studies, where the set S is interpreted as a good predictor of Z. Finding the best set S of size k is NP-hard, and certain approximation hardness results are known [2,11]. However, despite its tremendous importance to statistical sciences, very little was known in terms of approximation algorithms until recent results by Gilbert et al. [6] and Tropp [17] established approximation guarantees for the very special case of nearly independent X_i variables. In ongoing work, we are investigating several more general cases of the subset selection problem for regression, in particular with applications to selecting samples to draw in sensor network environments. Over the past year, we have obtained the following key results (which are currently under submission [1]):

Theorem 1. If the pairwise covariances between the X_i are small (at most 1/6k, if k variables can be selected), then the frequently used Forward Regression heuristic is a provably good approximation.

The quality of approximation is characterized precisely in [1], but omitted here due to space constraints. This result improves on the ones of [6,17], in that it analyzes a more commonly used algorithm, and obtains somewhat improved bounds. The next theorem extends the result to a significantly wider class of covariance matrices, where several pairs can have higher covariances.

Theorem 2. If the pairs of variables X_i with high covariance (exceeding Ω(1/4k)) form a tree, then a provably good approximation can be obtained in polynomial time using rounding and dynamic programming.

While this result significantly extends the cases that can be approximated, it is not directly relevant to measuring physical phenomena. Hence, we also study the case of sensors embedded in a metric space, where the covariance between sensors' readings is a monotone decreasing function of their distance. The general version of this problem is the subject of ongoing work, but [1] contains a promising initial finding:

Theorem 3. If the sensors are embedded on a line (in one dimension), and the covariance decreases roughly exponentially in the distance, then a provably good approximation can be obtained in polynomial time.

The algorithm is again based on rounding and a different dynamic program, and makes use of some remarkable properties of matrix inverses for this type of covariance matrix. At the moment, we are working on extending these results to more general metrics (in particular, two-dimensional Euclidean metrics), and different dependencies of covariances on the distance.
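As a sketch of the Forward Regression heuristic analyzed in Theorem 1 (not the implementation of [1]), the following Python code greedily adds the measurement that most reduces the mean squared prediction error Err(Z, S) = Var(Z) − b_S^T C_S^{-1} b_S, assuming C, b, and Var(Z) have been estimated from past models; the numbers are synthetic.

import numpy as np

def forward_regression(C, b, var_z, k):
    """Greedy forward selection: at each step add the index that most reduces
    Err(Z, S) = Var(Z) - b_S^T C_S^{-1} b_S (mean squared prediction error)."""
    selected, err = [], var_z
    for _ in range(k):
        best_idx, best_err = None, np.inf
        for i in range(len(b)):
            if i in selected:
                continue
            S = selected + [i]
            C_S, b_S = C[np.ix_(S, S)], b[S]
            cand_err = var_z - b_S @ np.linalg.solve(C_S, b_S)
            if cand_err < best_err:
                best_idx, best_err = i, cand_err
        selected.append(best_idx)
        err = best_err
    return selected, err

# Tiny synthetic example: 4 weakly correlated sensors, pick the best 2.
C = np.array([[1.0, 0.1, 0.0, 0.1],
              [0.1, 1.0, 0.1, 0.0],
              [0.0, 0.1, 1.0, 0.1],
              [0.1, 0.0, 0.1, 1.0]])
b = np.array([0.8, 0.3, 0.6, 0.1])   # covariances between each X_i and Z
print(forward_regression(C, b, var_z=1.0, k=2))

Each greedy step requires only one small linear solve per candidate index, so the heuristic is cheap for the modest values of k used in sample selection.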

3 Scalar Field Estimation

Sensor networks provide new tools for observing and monitoring the environment. In aquatic environments, accurately measuring quantities such as temperature, chlorophyll, salinity, and concentration of various nutrients is useful to scientists seeking a better understanding of aquatic ecosystems, as well as government officials charged with ensuring public safety via appropriate hazard warning and remediation measures.


Broadly speaking, these quantities of interest are scalar fields. Each is characterized by a single scalar quantity which varies spatiotemporally. Intuitively, the more readings taken near the location where a field estimate is desired, the smaller the reconstruction error. In other words, the spatial distribution of the measurements (the samples) affects the estimation error. In many cases, it may not be feasible to move the static sensor nodes after deployment. In such cases, one or more mobile robots could be used to augment the static sensor network, hence forming a sensor-actuator network or a robotic sensor network.

The problem of adaptive sampling: An immediate question to ask is how to coordinate the mobile robots and the static nodes such that the error associated with the estimation of the scalar field is minimized, subject to the constraint that the energy available to the mobile robot(s) is bounded. Specifically, if each static node makes a measurement in its vicinity, and the total energy available to the mobile robot is known, what path should the mobile robot take to minimize the mean square integrated error associated with the reconstruction of the entire field? Here we assume that the energy consumed by communications and sensing is negligible compared to the energy consumed in moving the mobile robot. We also assume that the mobile robot can communicate with all the static nodes and acquire sensor readings from them. Finally, we focus on reconstructing phenomena which do not change temporally (or change very slowly compared to the time it takes the mobile robot to complete a tour of the environment).

The domain: We develop a general solution to the above problem and test it on a particular setup designed to monitor an aquatic environment. The experimental setup is a system of anchored buoys (the static nodes) and a robotic boat (the mobile robot) capable of measuring temperature and chlorophyll concentrations. This testbed is part of the NAMOS (Networked Aquatic Microbial Observing System) project (http://robotics.usc.edu/~namos), which is used in studies of microbial communities in freshwater and marine environments [3,15].

Contributions: We propose an adaptive sampling algorithm for a mobile sensor network consisting of a set of static nodes and a mobile robot tasked to reconstruct a scalar field. Our algorithm is based on local linear regression [13,5]. Sensor readings from static nodes (a set of buoys) are sent to the mobile robot (a boat) and used to estimate the Hessian matrix of the scalar field (the surface temperature of a lake), which is directly related to the estimation error. Based on this information, a path planner generates a path for the boat such that the resulting integrated mean square error (IMSE) of the field reconstruction is minimized, subject to the constraint that the boat has a finite amount of energy which it can expend on the traverse. Data from extensive (several km) traverses in the field, as well as simulations, validate the performance of our algorithm. We are currently working on how to determine the appropriate resolution to discretize the sensed field. One interesting observation from the simulations and experiments is that when the initial available energy is increased, the estimation errors decrease rapidly and level off instead of decreasing to zero. Theoretically, when the energy available to the mobile node increases, more sensor readings can be taken and hence the estimation errors should keep decreasing. By examining the path generated by the adaptive sampling algorithm, we found that when the initial energy is enough for the mobile node to go through all the 'important' locations, increasing the initial energy does not have much effect on the estimation error. We plan to investigate advanced path planning strategies and alternative sampling design strategies in future work.
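A minimal local linear regression estimate of the field at a query point, in the spirit of [13,5] but not the NAMOS planner itself; the buoy locations, readings, and Gaussian bandwidth below are made up.

import numpy as np

def llr_estimate(xy, values, query, bandwidth=0.5):
    """Local linear regression: fit value ~ a + b.(x - query) with Gaussian
    weights centred on the query point, and return the fitted intercept a."""
    d = xy - query                               # offsets from the query point
    w = np.exp(-np.sum(d**2, axis=1) / (2 * bandwidth**2))
    X = np.hstack([np.ones((len(xy), 1)), d])    # design matrix [1, dx, dy]
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ values)
    return beta[0]                               # field estimate at `query`

# Hypothetical buoy locations (km) and temperature readings on a small lake.
buoys = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9], [0.6, 0.6], [0.2, 0.7]])
temps = np.array([18.2, 19.1, 17.5, 18.4, 17.9])
print(llr_estimate(buoys, temps, query=np.array([0.5, 0.5])))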

4 Faults in Sensor Data

With the maturation of sensor network software, we are increasingly seeing longer-term deployments of wireless sensor networks in real world settings. As a result, research attention is now turning towards drawing meaningful scientific inferences from the collected data [16]. Before sensor networks can become effective replacements for existing scientific instruments, it is important to ensure the quality of the collected data. Already, several deployments have observed faulty sensor readings caused by incorrect hardware design or improper calibration, or by low battery levels [12,16]. Given these observations, and the realization that it will be impossible to always deploy a perfectly calibrated network of sensors, an important research direction for the future will be automated detection, classification, and root-cause analysis of sensor faults, as well as techniques that can automatically scrub collected sensor data to ensure high quality. A first step in this direction is an understanding of the prevalence of faulty sensor readings in existing real-world deployments. We focus on a small set of sensor faults that have been observed in real deployments: single-sample spikes in sensor readings (SHORT faults), longer duration noisy readings (NOISE faults), and anomalous constant offset readings (CONSTANT faults). Given these fault models, our work makes the following two contributions.

Fig. 1. GDI data set. (a) SHORT faults: fraction of samples with faults (detected, false negative, false positive) for the light (left), humidity (center), and pressure (right) sensors. (b) SHORT faults, light sensor: fraction of samples with faults for each node.

Detection Methods. We have explored three qualitatively different techniques for automatically detecting such faults from a trace of sensor readings. Rule-based methods leverage domain knowledge to develop heuristic rules for detecting and identifying faults. Linear Least-Squares Estimation (LLSE) based methods predict "normal" sensor behavior by leveraging sensor correlation, flagging deviations from the normal as sensor faults. Finally, learning-based methods (based on Hidden Markov Models) are trained to statistically detect and identify classes of faults.

Our findings indicate that these methods sit at different points on the accuracy/robustness spectrum. While rule-based methods can detect and classify faults, they can be sensitive to the choice of parameters. By contrast, the LLSE method is a bit more robust to parameter choices but relies on spatial correlations and cannot classify faults. Finally, our learning method (based on Hidden Markov Models) is cumbersome, partly because it requires training, but it can fairly accurately detect and classify faults. We also explored hybrid detection techniques, which combine these three methods in ways that can be used to reduce false positives or false negatives, whichever is more important for the application. These results are omitted for brevity and the interested reader is referred to [14].

Evaluation on Real-World Datasets. We applied our detection methods to real-world data sets. Here, we present results from the Great Duck Island (GDI) data set [9], where we examine the fraction of faulty samples in a sensor trace. The predominant fault in the readings was of the type SHORT. We applied the SHORT rule, the LLSE method, and Hybrid(I) (a hybrid detection technique) to detect SHORT faults in light, humidity and pressure sensor readings. Figure 1(a) shows the overall prevalence (computed by aggregating results from all 15 nodes) of SHORT faults for different sensors in the GDI data set. (On the x-axis of this figure, the SHORT rule's label is R, LLSE's label is L, and Hybrid(I)'s label is I.) The Hybrid(I) technique eliminates any false positives reported by the SHORT rule or the LLSE method. The intensity of SHORT faults was high enough to detect them by visual inspection of the entire sensor readings time series. This ground truth is included for reference in the figure under the label V. It is evident from the figure that SHORT faults are relatively infrequent. They are most prevalent in the light sensor readings (approximately 1 fault every 2000 samples). Figure 1(b) shows the distribution of SHORT faults in light sensor readings across various nodes. (Here, node numbers are indicated on the x-axis.) SHORT faults do not exhibit any discernible pattern across different sensor nodes; the same holds for other sensors, but we have omitted the corresponding graphs for brevity. For results on other data sets, please refer to [14].

Our study informs the research on ensuring data quality. Even though we find that faults are relatively rare, they are not negligibly so, and careful attention needs to be paid to engineering the deployment and to analyzing the data. Furthermore, our detection methods could be used as part of an online fault diagnosis system, i.e., one where corrective steps could be taken during the data collection process based on the diagnostic system's results.
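A toy rule-based detector for SHORT faults, flagging an isolated sample that jumps away from its neighbors and immediately returns; the threshold and trace below are hypothetical, and this is not the exact rule of [14].

def detect_short_faults(readings, jump_threshold):
    """Flag single-sample spikes: a large jump away from the previous sample
    followed by an immediate return toward the pre-spike level."""
    faults = []
    for i in range(1, len(readings) - 1):
        jump_in = abs(readings[i] - readings[i - 1])
        jump_out = abs(readings[i + 1] - readings[i])
        settled = abs(readings[i + 1] - readings[i - 1])
        if jump_in > jump_threshold and jump_out > jump_threshold and settled < jump_threshold:
            faults.append(i)
    return faults

# Toy light-sensor trace with an isolated spike at index 4.
trace = [410, 412, 415, 414, 930, 416, 417, 415]
print(detect_short_faults(trace, jump_threshold=100))   # -> [4]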

5 Concluding Remarks

Overall, our vision for AMBROSia is that it will facilitate observation, detection, and tracking of scientific phenomena that were previously only partially (or not at all) observable and/or understood. In this paper we outlined results corresponding to some of our recent steps towards achieving this vision.


References

1. A. Das and D. Kempe. Algorithms for subset selection in regression, 2006. Submitted to STOC 2007.
2. G. Davis, S. Mallat, and M. Avellaneda. Greedy adaptive approximation. Journal of Constructive Approximation, 13:57–98, 1997.
3. Amit Dhariwal, Bin Zhang, Carl Oberg, Beth Stauffer, Aristides Requicha, David Caron, and Gaurav S. Sukhatme. Networked aquatic microbial observing system. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), May 2006.
4. G. Diekhoff. Statistics for the Social and Behavioral Sciences. Wm. C. Brown Publishers, 2002.
5. Jianqing Fan. Local linear regression smoothers and their minimax efficiencies. The Annals of Statistics, 21(1):196–216, 1993.
6. A. Gilbert, S. Muthukrishnan, and M. Strauss. Approximation of functions over redundant dictionaries using coherence. In Proc. ACM-SIAM Symposium on Discrete Algorithms, 2003.
7. Leana Golubchik, David Caron, Abhimanyu Das, Amit Dhariwal, Ramesh Govindan, David Kempe, Carl Oberg, Abhishek Sharma, Beth Stauffer, Gaurav Sukhatme, and Bin Zhang. A Generic Multi-scale Modeling Framework for Reactive Observing Systems: an Overview. In Proceedings of the Dynamic Data Driven Application Systems Workshop held with ICCS, 2006.
8. R. A. Johnson and D. W. Wichern. Applied Multivariate Statistical Analysis. Prentice Hall, 2002.
9. Alan Mainwaring, Joseph Polastre, Robert Szewczyk, David Culler, and John Anderson. Wireless Sensor Networks for Habitat Monitoring. In the ACM International Workshop on Wireless Sensor Networks and Applications, WSNA '02, 2002.
10. A. Miller. Subset Selection in Regression. Chapman and Hall, second edition, 2002.
11. B. Natarajan. Sparse approximation solutions to linear systems. SIAM Journal on Computing, 24:227–234, 1995.
12. N. Ramanathan, L. Balzano, M. Burt, D. Estrin, E. Kohler, T. Harmon, C. Harvey, J. Jay, S. Rothenberg, and M. Srivastava. Rapid Deployment with Confidence: Calibration and Fault Detection in Environmental Sensor Networks. Technical Report 62, CENS, April 2006.
13. D. Ruppert and M. P. Wand. Multivariate locally weighted least squares regression. The Annals of Statistics, 22(3):1346–1370, 1994.
14. A. Sharma, L. Golubchik, and R. Govindan. On the Prevalence of Sensor Faults in Real World Deployments. Technical Report 07-888, Computer Science, University of Southern California, 2007.
15. Gaurav S. Sukhatme, Amit Dhariwal, Bin Zhang, Carl Oberg, Beth Stauffer, and David Caron. The design and development of a wireless robotic networked aquatic microbial observing system. Environmental Engineering Science, 2007.
16. Gilman Tolle, Joseph Polastre, Robert Szewczyk, David Culler, Neil Turner, Kevin Tu, Stephen Burgess, Todd Dawson, Phil Buonadonna, David Gay, and Wei Hong. A Macroscope in the Redwoods. In SenSys '05: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 51–63, New York, NY, USA, 2005. ACM Press.
17. J. Tropp. Topics in Sparse Approximation. PhD thesis, University of Texas, Austin, 2004.

Dynamically Identifying and Tracking Contaminants in Water Bodies Craig C. Douglas1,2 , Martin J. Cole3 , Paul Dostert4 , Yalchin Efendiev4 , Richard E. Ewing4 , Gundolf Haase5 , Jay Hatcher1 , Mohamed Iskandarani6 , Chris R. Johnson3 , and Robert A. Lodder7 University of Kentucky, Department of Computer Science, 773 Anderson Hall, Lexington, KY 40506-0046, USA [email protected] 2 Yale University, Department of Computer Science, P.O. Box 208285 New Haven, CT 06520-8285, USA [email protected] 3 University of Utah, Scientific Computing and Imaging Institute, Salt Lake City, UT 84112, USA {crj,mjc}@cs.utah.edu 4 Texas A&M University, Institute for Scientific Computation, 612 Blocker, 3404 TAMU, College Station, TX 77843-3404, USA richard [email protected], {dostert,efendiev}@math.tamu.edu 5 Karl-Franzens University of Graz, Mathematics and Computational Sciences, A-8010 Graz, Austria [email protected] 6 University of Miami, Rosenstiel School of Marine and Atmospheric Science, 4600 Rickenbacker Causeway, Miami, FL 33149-1098, USA [email protected] 7 University of Kentucky, Department of Chemistry, Lexington, KY, 40506-0055, USA [email protected] 1

Abstract. We present an overview of an ongoing project to build a DDDAS for identifying and tracking chemicals in water. The project involves a new class of intelligent sensor, building a library to optically identify molecules, communication techniques for moving objects, and a problem solving environment. We are developing an innovative environment so that we can create a symbiotic relationship between computational models for contaminant identification and tracking in water bodies and a new instrument, the Solid-State Spectral Imager (SSSI), to gather hydrological and geological data and to perform chemical analyses. The SSSI is both small and light and can scan ranges of up to about 10 meters. It can easily be used with remote sensing applications.

1 Introduction

In this paper, we describe an intelligent sensor and how we are using it to create a dynamic data-driven application system (DDDAS) to identify and track contaminants in water bodies. This DDDAS has applications to tracking polluters,


finding sunken vehicles, and ensuring that drinking water supplies are safe. This paper is a sequel to [1]. In Sec. 2, we discuss the SSSI. In Sec. 3, we discuss the problem solving environment that we have created to handle data to and from SSSIs in the field. In Sec. 4, we discuss our approach to obtaining accurate predictions from sensor data. In Sec. 5, we state some conclusions.

2 The SSSI

Using a laser-diode array, photodetectors, and on board processing, the SSSI combines innovative spectroscopic integrated sensing and processing with a hyperspace data analysis algorithm [2]. The array performs like a small network of individual sensors. Each laser-diode is individually controlled by a programmable on board computational device that is an integral part of the SSSI and the DDDAS. Ultraviolet, visible, and near-infrared laser diodes illuminate target points using a precomputed sequence, and a photodetector records the amount of reflected light. For each point illuminated, the resulting reflectance data is processed to separate the contribution of each wavelength of light and classify the substances present. An optional radioactivity monitor can enhance the SSSI’s identification abilities. The full scale SSSI implementation will have 25 lasers in discrete wavelengths between 300 nm and 2400 nm with 5 rows of each wavelength, consume less than 4 Watts, and weigh less than 600 grams. For water monitoring in the open ocean, imaging capability is unnecessary. A single row of diodes with one diode at each frequency is adequate. Hence, power consumption of the optical system can be reduced to approximately one watt. Several prototype implementations of SSSI have been developed and are being tested at the University of Kentucky. These use an array of LEDs instead of lasers. The SSSI combines near-infrared, visible, and ultraviolet spectroscopy with a statistical classification algorithm to detect and identify contaminants in water. Nearly all organic compounds have a near-IR spectrum that can be measured. Near-infrared spectra consist of overtones and combinations of fundamental midinfrared bands, which makes near-infrared spectra a powerful tool for identifying organic compounds while still permitting some penetration of light into samples [3]. The SSSI uses one of two techniques for encoding sequences of light pulses in order to increase the signal to noise ratio: Walsh-Hadamard or Complementary Randomized Integrated Sensing and Processing (CRISP). In a Walsh-Hadamard sequence multiple laser diodes illuminate the target at the same time, increasing the number of photons received at the photo detector. The Walsh-Hadamard sequence can be demultiplexed to individual wavelength responses with a matrix-vector multiply [4]. Two benefits of generating encoding sequences by this method include equivalent numbers of on and off states for each sequence and a constant number of diodes in the on state at each resolution point of a data acquisition period.


Fig. 1. SCIRun screen with telemetry module in forefront

CRISP encoding uses orthogonal pseudorandom codes with unequal numbers of on and off states. The duty cycle of each code is different: the codes are selected to deliver the highest duty cycles at the wavelengths where the most light is needed and the lowest duty cycles where the least light is needed, so that the sum of all of the transmitted (or reflected) light from the samples is proportional to the analyte concentration of interest.
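The demultiplexing step described above can be illustrated with a small Python sketch using a ±1 Hadamard matrix (real hardware would use on/off diode patterns derived from it, such as an S-matrix); the wavelength count, reflectances, and noise level are invented.

import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    return H

n = 8                                       # number of wavelengths (power of two)
H = hadamard(n)                             # each row: one +/-1 illumination pattern
true_response = np.linspace(0.2, 1.6, n)    # hypothetical per-wavelength reflectance

# The detector sees one summed reading per pattern (plus a little noise).
rng = np.random.default_rng(0)
detector = H @ true_response + rng.normal(scale=0.01, size=n)

# Demultiplexing is a single matrix-vector multiply: H H^T = n I, so H^T / n inverts H.
recovered = H.T @ detector / n
print(np.round(recovered, 3))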

3 Problem Solving Environment SCIRun

SCIRun version 3.0 [5,6], a scientific problem solving environment, was released in late 2006. It includes a telemetry module based on [7], which provides a robust and secure set of Java tools for data transmission that assumes that a known broker exists to coordinate sensor data collection and use by applications. Each tool has a command line driven form plus a graphical front end that makes it so easy that even the authors can use the tools. In addition there is a Grid based tool that can be used to play back already collected data. We used Apple’s XGrid environment [8] (any Grid environment will work, however) since if someone sits down and uses one of the computers in the Grid, the sensors handled by that computer disappear from the network until the computer is idle again for a small period of time. This gives us the opportunity to develop fault tolerant methods for unreliable sensor networks. The clients (sensors or applications) can come and go on the Internet, change IP addresses, collect historical data, or collect just new data (letting the missed data fall on the floor). The tools were designed with disaster management [9] in mind and stress ease of use when the user is under duress and must get things right immediately.


A new Socket class was added to SCIRun, which encapsulates the socket traffic and is used to connect and transfer data from the server. The client handshakes with the server, which is informed of an ip:port where the client can be reached, and then listens on that port. Periodically, or as the server has new data available, the server sends data to the listening client. The configuration for SCIRun was augmented to include libgeotiff [10]. SCIRun then links against this library and has its API available within the modules. This API can be used to extract the extra information embedded in the tiff tags in various supported formats. For example, position and scale information can be extracted so that the images can be placed correctly. To allow controller interfaces to be built for the SSSI, a simulation of the device has been written in Matlab. This simulation follows the structure of the firmware code and provides the same basic interface as the firmware device. Data files are used in place of the SSSI's serial communication channel to simulate data exchange in software. Matlab programs are also provided to generate sample data files to aid in the development of Hadamard-Walsh and CRISP encodings for various SSSI configurations. The simulation also provides insight into the SSSI's firmware by emulating the use of oversampling and averaging to increase data precision and demonstrating how the data is internally collected and processed. The simulation can be used for the development of interfaces to the SSSI while optimization and refinement of the SSSI firmware continues. SCIRun has a Matlab module so that we can pipe data to and from the SSSI emulator. As a result, we can tie together the data transfer and SSSI components easily into a system for training new users and to develop virtual sensor networks before deployment of a real sensor network in the field.
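Purely as a schematic of the handshake pattern described above (not the SCIRun Socket class or the Java telemetry tools), a client might register a callback port with the broker and then listen for pushed data; the message format, host names, and ports are hypothetical.

import socket

def register_and_listen(broker_host, broker_port, listen_port):
    """Toy version of the push-based telemetry pattern: tell the broker where we
    can be reached, then accept data pushed to that address."""
    # Handshake: send the broker our callback port (here as a plain text line).
    with socket.create_connection((broker_host, broker_port)) as ctrl:
        ctrl.sendall(f"CALLBACK {listen_port}\n".encode())

    # Listen for data the broker pushes as it becomes available.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", listen_port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while chunk := conn.recv(4096):
                print("received", len(chunk), "bytes")

# register_and_listen("broker.example.org", 9000, 9001)   # hypothetical broker address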

4 Accurate Predictions

The initial deployment of the sensor network and model will focus on estuarine regions where water quality monitoring is critical for human health and environmental monitoring. The authors will capitalize on an existing configuration of the model to the Hudson-Raritan Estuary to illustrate the model's capabilities (see [1] for details). We will consider a passive tracer driven by external sources:

    ∂C(x, t)/∂t − L(C(x, t)) = S(x, t),   C(x, 0) = C^0(x),   x ∈ Ω,

where C is the concentration of contaminant, S is a source term, and L is the linear operator for the passive scalar (advection-diffusion-reaction). L involves the velocity field, which is obtained via the forward model based on the two-dimensional Spectral Element Ocean Model (SEOM-2D). This model solves the shallow water equations and the details can be found in our previous paper [1]. We have developed the spectral element discretization, which relies on relatively high degree (5-8th) polynomials to approximate the solution within flow equations. The main features of the spectral element method are: geometric flexibility due to its unstructured grids; its dual paths to convergence, exponential by increasing polynomial degree or algebraic via increasing the number of elements; dense computational kernels with sparse inter-element synchronization; and excellent scalability on parallel machines.

We now present our methodology for obtaining improved predictions based on sensor data. For simplicity, our example is restricted to synthetic velocity fields. Sensor data is used to improve the predictions by updating the solution at previous time steps, which is used for forecasting. This procedure consists of updating the solution and source term history conditioned to observations; it reduces the computational errors associated with incorrect initial/boundary data, source terms, etc., and improves the predictions [11,12,13]. We assume that the source term can be decomposed into pulses at different time steps (recording times) and various locations. We represent time pulses by δ_k(x, t), which corresponds to a contaminant source at the location x = x_k. We seek the initial condition as a linear combination of some basis functions,

    C^0(x) ≈ C̃^0(x) = Σ_{i=1}^{N_D} λ_i ϕ_i^0(x).

We solve, for each i,

    ∂ϕ_i/∂t − L(ϕ_i) = 0,   ϕ_i(x, 0) = ϕ_i^0(x).

Thus, an approximation to the solution of ∂C/∂t − L(C) = 0, C(x, 0) = C^0(x), is given by

    C̃(x, t) = Σ_{i=1}^{N_D} λ_i ϕ_i(x, t).

To seek the source terms, we consider the following basis problems:

    ∂ψ_k/∂t − L(ψ_k) = δ_k(x, t),   ψ_k(x, 0) = 0,

for each k. Here, δ_k(x, t) represents unit source terms that can be used to approximate the actual source term. In general, the δ_k(x, t) have larger support both in space and time in order to achieve accurate predictions. We denote the solution to this equation, for each k, as {ψ_k(x, t)}_{k=1}^{N_c}. Then the solution to our original problem with both the source term and initial condition is given by

    C̃(x, t) = Σ_{i=1}^{N_D} λ_i ϕ_i(x, t) + Σ_{k=1}^{N_c} α_k ψ_k(x, t).

Thus, our goal is to minimize

    F(α, λ) = Σ_{j=1}^{N_s} [ Σ_{k=1}^{N_c} α_k ψ_k(x_j, t) + Σ_{k=1}^{N_D} λ_k ϕ_k(x_j, t) − γ_j(t) ]^2
              + Σ_{k=1}^{N_c} κ̃_k (α_k − β̃_k)^2 + Σ_{k=1}^{N_D} κ̂_k (λ_k − β̂_k)^2,        (1)

where N_s denotes the number of sensors. If we denote N = N_c + N_D, μ = [α_1, · · · , α_{N_c}, λ_1, · · · , λ_{N_D}], η(x, t) = [ψ_1, · · · , ψ_{N_c}, ϕ_1, · · · , ϕ_{N_D}], β = [β̃_1, · · · , β̃_{N_c}, β̂_1, · · · , β̂_{N_D}], and κ = [κ̃_1, · · · , κ̃_{N_c}, κ̂_1, · · · , κ̂_{N_D}], then we want to minimize

    F(μ) = Σ_{j=1}^{N_s} [ Σ_{k=1}^{N} μ_k η_k(x_j, t) − γ_j(t) ]^2 + Σ_{k=1}^{N} κ_k (μ_k − β_k)^2.

This leads to solving the least squares problem Aμ = R, where

    A_{mn} = Σ_{j=1}^{N_s} η_m(x_j, t) η_n(x_j, t) + δ_{mn} κ_m,

and

    R_m = Σ_{j=1}^{N_s} η_m(x_j, t) γ_j(t) + κ_m β_m.
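A small numpy sketch of this assembly and solve, for a single recording time and synthetic basis values; E[j, m] stands in for η_m(x_j, t), and the values of κ, β, and the noise level are invented.

import numpy as np

def solve_for_mu(E, gamma, kappa, beta):
    """Assemble A and R from basis values at the sensors and solve A mu = R.

    E[j, m]  = eta_m(x_j, t)  (basis solution m evaluated at sensor j)
    gamma[j] = sensor reading at x_j
    kappa, beta: penalty weights and prior values for each coefficient.
    """
    A = E.T @ E + np.diag(kappa)      # A_mn = sum_j eta_m eta_n + delta_mn kappa_m
    R = E.T @ gamma + kappa * beta    # R_m  = sum_j eta_m gamma_j + kappa_m beta_m
    return np.linalg.solve(A, R)

# Synthetic setup: 5 sensors, N = 4 basis functions (all values made up).
rng = np.random.default_rng(1)
E = rng.random((5, 4))
mu_true = np.array([0.5, 0.0, 1.2, 0.3])
gamma = E @ mu_true + rng.normal(scale=0.01, size=5)
kappa = np.full(4, 1e-3)              # weak penalty toward the prior
beta = np.zeros(4)
print(solve_for_mu(E, gamma, kappa, beta))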

We can only record sensor values at some discrete time steps t = {t_j}_{j=1}^{N_t}. We want to use the sensor values at t = t_1 to establish an estimate for μ, then use each successive set of sensor values to refine this estimate. After each step, we update and then solve using the next sensor value. Next, we present a representative numerical result. We consider contaminant transport on a flat surface, a unit dimensionless square, with convective velocity in the direction (1, 1). The source term is taken to be 0.25 in [0.1, 0.3] × [0.1, 0.3] for the time interval from t = 0 to t = 0.05. The initial condition is assumed to have support over the entire domain. We derive the initial condition (solution at the previous time step) by solving the original contaminant transport problem with some source terms, assuming some prior contaminant history. To get our observation data for the simulations, we run the forward problem and sample sensor data every 0.05 seconds for 1.0 seconds. We sample at the following five locations: (0.5, 0.5), (0.25, 0.25), (0.25, 0.75), (0.75, 0.25), and (0.75, 0.75). When reconstructing, we assume that there is a subdomain Ω_c ⊂ Ω where our initial condition and source terms are contained. We assume that the source term and initial condition can be represented as linear combinations of basis functions defined on Ω_c. For this particular model, we assume the subdomain is [0, 0.4] × [0, 0.4] and we have piecewise constant basis functions. Furthermore, we assume that the source term in our reconstruction is nonzero for the same time interval as S(x, t). Thus we assume the source basis functions are nonzero only for t ∈ [0, 0.05]. To reconstruct, we run the forward simulation for a 4 × 4 grid of piecewise constant basis functions on [0, 0.4] × [0, 0.4] for both the initial condition and the source term. We then reconstruct the coefficients for the initial condition and source term using the approach proposed earlier. Figure 2 shows a comparison between the original surface (in green) and the reconstructed surface (in red). The plots are for t = 0.1, 0.2, 0.4, and 0.6. We observe that the recovery at initial times is not very accurate.


Fig. 2. Comparison between reconstructed (red) solution and exact solution at t = 0.1 (upper left), t = 0.2 (upper right), t = 0.4 (lower left), and t = 0.6 (lower right)

This is due to the fact that we have not collected sufficient sensor data. As time progresses, the prediction results improve. We observe that at t = 0.6, we have nearly exact prediction of the contaminant transport. To account for the uncertainties associated with sensor measurements, we consider an update of the initial condition and source terms within a Bayesian framework. The posterior distribution is set up based on measurement errors and prior information. This posterior distribution is complicated and involves the solutions of partial differential equations. We developed an approach that combines least squares with a Bayesian approach, such as Metropolis-Hastings Markov chain Monte Carlo (MCMC) [14], that gives a high acceptance rate. In particular, we can prove that rigorous sampling can be achieved by sampling the sensor data from the known distribution, thus obtaining various realizations of the initial data. Our approach has similarities with the Ensemble Kalman Filter approach, which can also be adapted in our problem. We have performed numerical studies and these results will be reported elsewhere.

5 Conclusions

In the last year, we have made strides in creating our DDDAS. We have developed software that makes sending data from locations that go on and off the Internet and possibly change IP addresses rather easy to work with. This is a stand-alone package that runs on any device that supports Java.

It has also been integrated into the newly released version 3.0 of SCIRun and is in use by other groups, including surgeons while operating on patients. We have also developed software that simulates the behavior of the SSSI and are porting the relevant parts so that it can be loaded into the SSSI to get real sensor data. We have developed algorithms that allow us to achieve accurate predictions in the presence of errors/uncertainties in dynamic source terms as well as other external conditions. We have tested our methodology in both deterministic and stochastic environments and have presented some simple examples in this paper.

References

1. Douglas, C.C., Harris, J.C., Iskandarani, M., Johnson, C.R., Lodder, R.A., Parker, S.G., Cole, M.J., Ewing, R.E., Efendiev, Y., Lazarov, R., Qin, G.: Dynamic contaminant identification in water. In: Computational Science - ICCS 2006: 6th International Conference, Reading, UK, May 28-31, 2006, Proceedings, Part III, Heidelberg, Springer-Verlag (2006) 393–400
2. Lowell, A., Ho, K.S., Lodder, R.A.: Hyperspectral imaging of endolithic biofilms using a robotic probe. Contact in Context 1 (2002) 1–10
3. Dempsey, R.J., Davis, D.G., Buice, R.G., Jr., Lodder, R.A.: Biological and medical applications of near-infrared spectrometry. Appl. Spectrosc. 50 (1996) 18A–34A
4. Silva, H.E.B.D., Pasquini, C.: Dual-beam near-infrared Hadamard spectrophotometer. Appl. Spectrosc. 55 (2001) 715–721
5. Johnson, C.R., Parker, S., Weinstein, D., Heffernan, S.: Component-based problem solving environments for large-scale scientific computing. Concur. Comput.: Practice and Experience 14 (2002) 1337–1349
6. SCIRun: A Scientific Computing Problem Solving Environment, Scientific Computing and Imaging Institute (SCI). http://software.sci.utah.edu/scirun.html (2007)
7. Li, W.: A dynamic data-driven application system (DDDAS) tool for dynamic reconfigurable point-to-point data communication. Master's thesis, University of Kentucky Computer Science Department, Lexington, KY
8. Apple OS X 10.4 XGrid Features, Apple, Inc. http://www.apple.com/acg/xgrid (2007)
9. Douglas, C.C., Beezley, J.D., Coen, J., Li, D., Li, W., Mandel, A.K., Mandel, J., Qin, G., Vodacek, A.: Demonstrating the validity of a wildfire DDDAS. In: Computational Science - ICCS 2006: 6th International Conference, Reading, UK, May 28-31, 2006, Proceedings, Part III, Heidelberg, Springer-Verlag (2006) 522–529
10. GeoTiff. http://www.remotesensing.org/geotiff/geotiff.html (2007)
11. Douglas, C.C., Efendiev, Y., Ewing, R.E., Ginting, V., Lazarov, R.: Dynamic data driven simulations in stochastic environments. Computing 77 (2006) 321–333
12. Douglas, C.C., Efendiev, Y., Ewing, R.E., Ginting, V., Lazarov, R., Cole, M.J., Jones, G.: Least squares approach for initial data recovery in dynamic data-driven applications simulations. Comp. Vis. in Science (2007) in press.
13. Douglas, C.C., Efendiev, Y., Ewing, R.E., Ginting, V., Lazarov, R., Cole, M.J., Jones, G.: Dynamic data-driven application simulations: interpolation and update. In: Environmental Security Air, Water and Soil Quality Modelling for Risk and Impact Assessment. NATO Security through Science Series C, New York, Springer-Verlag (2006)
14. Robert, C., Casella, G.: Monte Carlo Statistical Methods. Springer-Verlag, New York (1999)

Hessian-Based Model Reduction for Large-Scale Data Assimilation Problems Omar Bashir1 , Omar Ghattas2 , Judith Hill3 , Bart van Bloemen Waanders3 , and Karen Willcox1 1

Massachusetts Institute of Technology, Cambridge MA 02139, USA [email protected],[email protected] 2 The University of Texas at Austin, Austin TX 78712 [email protected] 3 Sandia National Laboratories, Albuquerque NM 87185 [email protected],[email protected]

Abstract. Assimilation of spatially- and temporally-distributed state observations into simulations of dynamical systems stemming from discretized PDEs leads to inverse problems with high-dimensional control spaces in the form of discretized initial conditions. Solution of such inverse problems in "real-time" is often intractable. This motivates the construction of reduced-order models that can be used as surrogates of the high-fidelity simulations during inverse solution. For the surrogates to be useful, they must be able to approximate the observable quantities over a wide range of initial conditions. Construction of the reduced models entails sampling the initial condition space to generate an appropriate training set, which is an intractable proposition for high-dimensional initial condition spaces unless the problem structure can be exploited. Here, we present a method that extracts the dominant spectrum of the input-output map (i.e. the Hessian of the least squares optimization problem) at low cost, and uses the principal eigenvectors as sample points. We demonstrate the efficacy of the reduction methodology on a large-scale contaminant transport problem.

Keywords: Model reduction; data assimilation; inverse problem; Hessian matrix; optimization.

1 Introduction

One important component of Dynamic Data Driven Application Systems (DDDAS) is the continuous assimilation of sensor data into an ongoing simulation. This inverse problem can be formulated as an optimal control problem, 

Partially supported by the National Science Foundation under DDDAS grants CNS0540372 and CNS-0540186, the Air Force Office of Scientific Research, and the Computer Science Research Institute at Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed-Martin Company, for the US Department of Energy under Contract DE-AC04-94AL85000.


in which the controls are the initial conditions, the constraints are the state equations describing the dynamics of the system, and the objective is the difference between the state observations and those predicted by the state equations, measured in some appropriate norm. When the physical system being simulated is governed by partial differential equations in three spatial dimensions and time, the forward problem alone (i.e. solution of the PDEs for a given initial condition) may require many hours of supercomputer time. The inverse problem, which requires repeated solution of the forward problem, may then be out of reach in situations where rapid assimilation of the data is required. In particular, when the simulation is used as a basis for forecasting or decision-making, a reduced model that can execute much more rapidly than the high-fidelity PDE simulation is then needed.

A crucial requirement for the reduced model is that it be able to replicate the output quantities of interest (i.e. the observables) of the PDE simulation over a wide range of initial conditions, so that it may serve as a surrogate of the high-fidelity PDE simulation during inversion. One popular method for generating a reduced model is through a projection basis (for example, by proper orthogonal decomposition in conjunction with the method of snapshots). To build such a reduced-order model, one typically constructs a training set by sampling the space of (discretized) initial conditions. When this space is high-dimensional, the problem of adequately sampling it quickly becomes intractable. Fortunately, for many ill-posed inverse problems, many components of the initial condition space have minimal or no effect on the output observables. This is particularly true when the observations are sparse. In this case, it is likely that an effective reduced model can be generated with few sample points. The question is how to locate these sample points.

Here, we consider the case of a linear forward problem, and propose that the sample points be associated with dominant eigenvectors of the Hessian matrix of the misfit function. This matrix maps inputs (initial conditions) to outputs (observables), and its dominant eigenvectors represent initial condition components that are most identifiable from observable data. Thus, one expects these eigenvectors to serve as good sample points for constructing the reduced model. In Section 2, we describe the model reduction framework we consider, and in Section 3 justify the choice of the dominant eigenvectors of the Hessian by relating it to the solution of a certain greedy optimization problem to locate the best sample points. Section 4 illustrates the methodology via application to a data assimilation inverse problem involving transport of an atmospheric contaminant.

2 Reduced-Order Dynamical Systems

Consider the general linear initial-value problem

    x(k + 1) = A x(k),        k = 0, 1, . . . , T − 1,        (1)
    y(k) = C x(k),            k = 0, 1, . . . , T,            (2)
    x(0) = x_0,                                               (3)

where x(k) ∈ R^N is the system state at time t_k, the vector x_0 contains the specified initial state, and we consider a time horizon from t = 0 to t = t_T. The vector y(k) ∈ R^Q contains the Q system outputs at time t_k. In general, we are interested in systems of the form (1)–(3) that result from spatial and temporal discretization of PDEs. In this case, the dimension of the system, N, is very large and the matrices A ∈ R^{N×N} and C ∈ R^{Q×N} result from the chosen spatial and temporal discretization methods. A reduced-order model of (1)–(3) can be derived by assuming that the state x(k) is represented as a linear combination of n basis vectors,

    x̂(k) = V x_r(k),                                          (4)

where x̂(k) is the reduced model approximation of the state x(k) and n ≪ N. The projection matrix V ∈ R^{N×n} contains as columns the orthonormal basis vectors V_i, i.e., V = [V_1 V_2 · · · V_n], and the reduced-order state x_r(k) ∈ R^n contains the corresponding modal amplitudes for time t_k. Using the representation (4) together with a Galerkin projection of the discrete-time system (1)–(3) onto the space spanned by the basis V yields the reduced-order model with state x_r and output y_r,

    x_r(k + 1) = A_r x_r(k),  k = 0, 1, . . . , T − 1,        (5)
    y_r(k) = C_r x_r(k),      k = 0, 1, . . . , T,            (6)
    x_r(0) = V^T x_0,                                         (7)

where A_r = V^T A V and C_r = C V. For convenience of notation, we write the discrete-time system (1)–(3) in matrix form as

    A x = F x_0,    y = C x,                                  (8)

where x = [x(0)^T x(1)^T . . . x(T)^T]^T, y = [y(0)^T y(1)^T . . . y(T)^T]^T, and the matrices A, F, and C are appropriately defined functions of A and C. Similarly, the reduced-order model (5)–(7) can be written in matrix form as

    A_r x_r = F_r x_0,    y_r = C_r x_r,                      (9)

where x_r, y_r, A_r, and C_r are defined analogously to x, y, A, and C but with the appropriate reduced-order quantities, and F_r = [V 0 . . . 0]^T. In many cases, we are interested in rapid identification of initial conditions from sparse measurements of the states over a time horizon; we thus require a reduced-order model that will provide accurate outputs for any initial condition contained in some set X_0. Using the projection framework described above, the task therefore becomes one of choosing an appropriate basis V so that the error between the full-order output y and the reduced-order output y_r is small for all initial conditions of interest.
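A toy numpy illustration of the projection in (4)–(7): build an orthonormal basis V from snapshots of one trajectory, form A_r = V^T A V and C_r = C V, and compare full and reduced outputs. The dynamics, observation operator, and dimensions below are invented; when the trajectory lies in the span of V, the reduced outputs match the full ones to round-off.

import numpy as np

def simulate(A, C, x0, T):
    """Run x(k+1) = A x(k), y(k) = C x(k) for k = 0..T and return the outputs."""
    x, ys = x0.copy(), []
    for _ in range(T + 1):
        ys.append(C @ x)
        x = A @ x
    return np.array(ys)

rng = np.random.default_rng(0)
N, Q, n, T = 50, 2, 5, 20

# Toy full model: symmetric A with three slowly decaying modes, the rest fast.
U = np.linalg.qr(rng.standard_normal((N, N)))[0]
eigs = np.concatenate([[0.95, 0.90, 0.85], np.full(N - 3, 0.05)])
A = U @ np.diag(eigs) @ U.T
C = rng.standard_normal((Q, N))
x0 = U[:, :3] @ np.array([1.0, -0.5, 0.25])      # initial state in the slow subspace

# Snapshots of one trajectory -> orthonormal basis V (via SVD / POD).
snaps = np.column_stack([np.linalg.matrix_power(A, k) @ x0 for k in range(T + 1)])
V = np.linalg.svd(snaps, full_matrices=False)[0][:, :n]

# Galerkin projection as in (5)-(7): A_r = V^T A V, C_r = C V, x_r(0) = V^T x_0.
Ar, Cr = V.T @ A @ V, C @ V
err = np.abs(simulate(A, C, x0, T) - simulate(Ar, Cr, V.T @ x0, T)).max()
print(f"max output error of the n={n} reduced model: {err:.2e}")

In this toy run the error is at round-off level because the trajectory lies in the span of V; for a general initial condition the accuracy depends on how well the sampled snapshots cover the initial-condition space, which is exactly the issue addressed in the next section.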

3 Hessian-Based Model Reduction

To determine the reduced model, we must identify a set of initial conditions to be sampled. At each selected initial condition, a forward simulation is performed to generate a set of states, commonly referred to as snapshots, from which the reduced basis is formed. The key question is then how to identify important initial conditions that should be sampled. Our approach is motivated by the greedy algorithm of [5], which proposed an adaptive approach to determine the parameter locations at which samples are drawn to form a reduced basis. The greedy algorithm adaptively selects these snapshots by finding the location in parameter–time space where the error between the full-order and reduced-order models is maximal, updating the basis with information gathered from this sample location, forming a new reduced model, and repeating the process. In the case of the initial-condition problem, the greedy approach amounts to sampling at the initial condition x_0^* ∈ X_0 that maximizes the error between the full and reduced-order outputs. For this formulation, the only restriction that we place on the set X_0 is that it contain vectors of unit length. This prevents unboundedness in the optimization problem, since otherwise the error in the reduced system could be made arbitrarily large. The key step in the greedy sampling approach is thus finding the worst-case initial condition x_0^*, which can be achieved by solving the optimization problem

x_0^* = arg max_{x_0 ∈ X_0} (y − y_r)^T (y − y_r)   (10)

where

𝒜 x = ℱ x_0,   y = 𝒞 x,               (11), (12)
𝒜_r x_r = ℱ_r x_0,   y_r = 𝒞_r x_r.   (13), (14)

Equations (10)–(14) define a large-scale optimization problem, which includes the full-scale dynamics as constraints. The linearity of the state equations can be exploited to eliminate the full-order and reduced-order states and yield an equivalent unconstrained optimization problem,

x_0^* = arg max_{x_0 ∈ X_0} x_0^T H^e x_0,   (15)

where

H^e = (𝒞 𝒜^{−1} ℱ − 𝒞_r 𝒜_r^{−1} ℱ_r)^T (𝒞 𝒜^{−1} ℱ − 𝒞_r 𝒜_r^{−1} ℱ_r).   (16)

It can be seen that (15) is a quadratic unconstrained optimization problem with Hessian matrix H^e ∈ ℝ^{N×N}. From (16), it can be seen that H^e is a symmetric positive semidefinite matrix. Since we are considering initial conditions of unit norm, the solution x_0^* maximizes the Rayleigh quotient; therefore, the solution of (15) is given by the eigenvector corresponding to the largest eigenvalue of H^e.


This eigenvector is the initial condition for which the error in reduced model output prediction is largest. Rather than constructing a reduced model at every greedy iteration, and determining the dominant eigenvector of the resulting error Hessian H^e, an efficient one-shot algorithm can be constructed by computing the dominant eigenmodes of the Hessian matrix

H = (𝒞 𝒜^{−1} ℱ)^T (𝒞 𝒜^{−1} ℱ).   (17)

Here, H ∈ ℝ^{N×N} is the Hessian matrix of the full-scale system, and does not depend on the reduced-order model. As before, H is a symmetric positive semidefinite matrix. It can be shown that, under certain assumptions, the eigenvectors of H with largest eigenvalues approximately solve the sequence of problems defined by (10)–(14) [3]. These ideas motivate the following basis-construction algorithm for the initial-condition problem. We use the dominant eigenvectors of the Hessian matrix H to identify the initial-condition vectors that have the most significant contributions to the outputs of interest. These vectors are in turn used to initialize the full-scale discrete-time system to generate a set of state snapshots that are used to form the reduced basis (using, for example, the proper orthogonal decomposition).
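A matrix-free sketch of this basis-construction procedure is given below, again assuming explicit A and C from (1)–(3) are available. The Hessian H is never formed; only its action on a vector is needed, which amounts to one forward simulation followed by one backward (adjoint-like) recursion. The function names and the snapshot strategy are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def forward_outputs(A, C, x0, T):
    """Map an initial condition to the stacked outputs y(0..T), i.e. apply C A^{-1} F."""
    x, ys = x0, []
    for _ in range(T + 1):
        ys.append(C @ x)
        x = A @ x
    return np.concatenate(ys)

def adjoint(A, C, z, T):
    """Apply the transpose of the input-output map to a stacked vector z via the
    backward recursion lam_k = C^T z_k + A^T lam_{k+1}, so lam_0 = sum_k (A^T)^k C^T z_k."""
    Q = C.shape[0]
    lam = np.zeros(A.shape[1])
    for k in range(T, -1, -1):
        lam = C.T @ z[k * Q:(k + 1) * Q] + A.T @ lam
    return lam

def hessian_pod_basis(A, C, T, p):
    """Dominant eigenvectors of H serve as sampled initial conditions; the snapshots
    they generate are compressed into a POD basis by a singular value decomposition."""
    N = A.shape[0]
    H = LinearOperator((N, N),
                       matvec=lambda v: adjoint(A, C, forward_outputs(A, C, v, T), T))
    _, vecs = eigsh(H, k=p, which='LM')           # p dominant eigenpairs of H
    snapshots = []
    for j in range(p):
        x = vecs[:, j]
        for _ in range(T + 1):                    # forward run from each eigenvector
            snapshots.append(x.copy())
            x = A @ x
    U, s, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)
    return U, s                                   # truncate columns of U by the decay of s
```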

4 Application: Model Reduction for 3D Contaminant Transport in an Urban Canyon

We demonstrate our model reduction method by applying it to a 3D airborne contaminant transport problem for which a solution is needed in real time. Intentional or unintentional chemical, biological, and radiological (CBR) contamination events are important national security concerns. In particular, if contamination occurs in or near a populated area, predictive tools are needed to rapidly and accurately forecast the contaminant spread to provide decision support for emergency response efforts. Urban areas are geometrically complex and require detailed spatial discretization to resolve the relevant flow and transport, making prediction in real-time difficult. Reduced-order models can play an important role in facilitating real-time turn-around, in particular on laptops in the field. However, it is essential that these reduced models be faithful over a wide range of initial conditions, since in principle any initial condition can be realized. Once a suitable reduced-order model has been generated, it can serve as a surrogate for the full model within an inversion framework to identify the initial conditions given sensor data (the full-scale case is discussed in [1]). To illustrate the generation of a reduced-order model that is accurate for arbitrary high-dimensional initial conditions, we consider a three-dimensional urban canyon geometry occupying a (dimensionless) 15 × 15 × 15 domain. Figure 1 shows the domain and buildings, along with locations of six sensors, all


placed at a height of 1.5. Contaminant transport is modeled by the advection-dispersion equation,

∂w/∂t + v·∇w − κ∇²w = 0     in Ω × (0, t_f),    (18)
w = 0                        on Γ_D × (0, t_f),  (19)
∂w/∂n = 0                    on Γ_N × (0, t_f),  (20)
w = w_0                      in Ω for t = 0,     (21)

where w is the contaminant concentration, v is the velocity vector field, κ is the diffusivity, t_f is the time horizon of interest, and w_0 is the given initial condition. Γ_D and Γ_N are respectively the portions of the domain boundary over which Dirichlet and Neumann boundary conditions are applied. Eq. (18) is discretized in space using an SUPG finite element method with linear tetrahedra, while the implicit Crank-Nicolson method is used to discretize in time. Homogeneous Dirichlet boundary conditions are specified for the concentration on the inflow boundary, x̄ = 0, and the ground, z̄ = 0. Homogeneous Neumann boundary conditions are specified for the concentration on all other boundaries. The velocity field v required in (18) is computed by solving the steady laminar incompressible Navier-Stokes equations, also discretized with SUPG-stabilized linear tetrahedra. No-slip conditions, i.e. v = 0, are imposed on the building faces and the ground z̄ = 0. The velocity at the inflow boundary x̄ = 0 is taken as known and specified in the normal direction as

v_x(z) = v_max (z / z_max)^{0.5},

with v_max = 3.0 and z_max = 15, and zero tangentially. On the outflow boundary x̄ = 15, a traction-free (Neumann) condition is applied. On all other boundaries (ȳ = 0, ȳ = 15, z̄ = 15), we impose a combination of no flow normal to the boundary and traction-free tangent to the boundary. The spatial mesh for the full-scale system contains 68,921 nodes and 64,000 tetrahedral elements. For both basis creation and testing, a final non-dimensional time t_f = 20.0 is used, discretized over 200 timesteps. The Peclet number based on the maximum inflow velocity and domain dimension is Pe = 900. The PETSc library [2] is used for all implementation.

Fig. 1. Building geometry and locations of outputs for the 3-D urban canyon problem

Fig. 2. Transport of contaminant concentration through urban canyon at six instants in time, beginning with the initial condition shown in upper left

Figure 2 illustrates a sample forward solution. The test initial condition used in this simulation, meant to represent the system state just after a contaminant release event, was constructed using a Gaussian function with a peak magnitude of 100 centered at a height of 1.5. For comparison with the full system, a reduced model was constructed based on the dominant Hessian eigenvector algorithm discussed in the previous section, with p = 31 eigenvector initial conditions and n = 137 reduced basis vectors (these numbers were determined based on eigenvalue decay rates). Eigenvectors were computed using the Arnoldi eigensolver within the SLEPc package [4], which is built on PETSc. Figure 3 shows a comparison of the full and reduced time history of concentration at each output location. There is no discernible difference between the two. The figure demonstrates that a reduced system of size n = 137, which is solved in a matter of seconds on a desktop, can accurately replicate the outputs of the full-scale system of size N = 65,600. We emphasize that the (offline) construction of the reduced-order model targets only the specified outputs, and otherwise has no knowledge of the initial conditions used in the test of Figure 3.

Fig. 3. Full (65,600 states) and reduced (137 states) model contaminant predictions at the six sensor locations for urban canyon example
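The spatially discretized advection-dispersion equation, advanced with Crank-Nicolson, yields a linear map of exactly the form (1). The sketch below illustrates this connection; it assumes a mass matrix M and an (SUPG-stabilized) advection-diffusion operator K have already been assembled with some finite element library, and is not the PETSc-based implementation used here.

```python
import scipy.sparse.linalg as spla

def crank_nicolson_stepper(M, K, dt):
    """One Crank-Nicolson step of the semi-discrete system M dw/dt + K w = 0:
    (M + dt/2 K) w_{k+1} = (M - dt/2 K) w_k.  The induced map w_k -> w_{k+1}
    plays the role of A in the discrete-time system x(k+1) = A x(k)."""
    lhs = (M + 0.5 * dt * K).tocsc()
    rhs = (M - 0.5 * dt * K).tocsr()
    solve = spla.factorized(lhs)          # reuse a single sparse LU factorization
    return lambda w: solve(rhs @ w)

# step = crank_nicolson_stepper(M, K, dt=0.1)   # e.g. 200 steps up to t_f = 20
# w = w0
# for k in range(200):
#     w = step(w)
```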

References

1. V. Akçelik, G. Biros, A. Draganescu, O. Ghattas, J. Hill, and B. van Bloemen Waanders. Dynamic data-driven inversion for terascale simulations: Real-time identification of airborne contaminants. In Proceedings of SC2005, Seattle, WA, 2005.
2. S. Balay, K. Buschelman, V. Eijkhout, W. Gropp, D. Kaushik, M. Knepley, L. McInnes, B. Smith, and H. Zhang. PETSc users manual. Technical Report ANL-95/11 - Revision 2.1.5, Argonne National Laboratory, 2004.
3. O. Bashir. Hessian-based model reduction with applications to initial condition inverse problems. Master's thesis, MIT, 2007.
4. V. Hernandez, J. Roman, and V. Vidal. SLEPc: A scalable and flexible toolkit for the solution of eigenvalue problems. ACM Transactions on Mathematical Software, 31(3):351–362, Sep 2005.
5. K. Veroy, C. Prud'homme, D. Rovas, and A. Patera. A posteriori error bounds for reduced-basis approximation of parametrized noncoercive and nonlinear elliptic partial differential equations. AIAA Paper 2003-3847, Proceedings of the 16th AIAA Computational Fluid Dynamics Conference, Orlando, FL, 2003.

Localized Ensemble Kalman Dynamic Data Assimilation for Atmospheric Chemistry

Adrian Sandu(1), Emil M. Constantinescu(1), Gregory R. Carmichael(2), Tianfeng Chai(2), John H. Seinfeld(3), and Dacian Dăescu(4)

(1) Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061. {asandu, emconsta}@cs.vt.edu
(2) Center for Global and Regional Environmental Research, The University of Iowa, Iowa City, 52242-1297. {gcarmich, tchai}@cgrer.uiowa.edu
(3) Department of Chemical Engineering, California Institute of Technology, Pasadena, CA 91125. [email protected]
(4) Department of Mathematics and Statistics, Portland State University. [email protected]

Abstract. The task of providing an optimal analysis of the state of the atmosphere requires the development of dynamic data-driven systems (DDDAS) that efficiently integrate the observational data and the models. Data assimilation, the dynamic incorporation of additional data into an executing application, is an essential DDDAS concept with wide applicability. In this paper we discuss practical aspects of nonlinear ensemble Kalman data assimilation applied to atmospheric chemical transport models. We highlight the challenges encountered in this approach, such as filter divergence and spurious corrections, and propose solutions to overcome them, such as background covariance inflation and filter localization. The predictability is further improved by including model parameters in the assimilation process. Results for a large-scale simulation of air pollution in the North-East United States illustrate the potential of nonlinear ensemble techniques to assimilate chemical observations.

1 Introduction

Our ability to anticipate and manage changes in atmospheric pollutant concentrations relies on an accurate representation of the chemical state of the atmosphere. As our fundamental understanding of atmospheric chemistry advances, novel data assimilation tools are needed to integrate observational data and models together to provide the best estimate of the evolving chemical state of the atmosphere. The ability to dynamically incorporate additional data into an executing application is a fundamental DDDAS concept (http://www.cise.nsf.gov/dddas). We refer to this process as data assimilation. Data assimilation has proved vital for meteorological forecasting.

This work was supported by the National Science Foundation through the award NSF ITR AP&IM 0205198 managed by Dr. Frederica Darema.



In this paper we focus on the particular challenges that arise in the application of nonlinear ensemble filter data assimilation to atmospheric chemical transport models (CTMs). Atmospheric CTMs solve the mass-balance equations for concentrations of trace species to determine the fate of pollutants in the atmosphere [16]. The CTM operator, M, will be denoted compactly as

c_i = M_{t_{i−1} → t_i}(c_{i−1}, u_{i−1}, c^{in}_{i−1}, Q_{i−1}),   (1)

where c represents the modeled species concentration, c^{in} the inflow numerical boundary conditions, u the wind fields, Q the surface emission rates, and the subscript denotes the time index. In our numerical experiments, we use the Sulfur Transport Eulerian Model (STEM) [16], a state-of-the-art atmospheric CTM. Kalman filters [12] provide a stochastic approach to the data assimilation problem. The filtering theory is described in Jazwinski [10] and the applications to atmospheric modeling in [13]. The computational burden associated with the filtering process has prevented the implementation of the full Kalman filter for large-scale models. Ensemble Kalman filters (EnKF) [2,5] may be used to facilitate the practical implementation, as shown by van Loon et al. [18]. There are two major difficulties that arise in EnKF data assimilation applied to CTMs: (1) CTMs have stiff components [15] that cause the filter to diverge [7] due to the lack of ensemble spread, and (2) the ensemble size is typically small in order to be computationally tractable, which leads to spurious filter corrections due to sampling errors. Kalman filter data assimilation has been discussed for DDDAS in another context by Jun and Bernstein [11]. This paper addresses the following issues: (1) background covariance inflation is investigated in order to avoid filter divergence, (2) localization is used to prevent spurious filter corrections caused by small ensembles, and (3) parameters are assimilated together with the model states in order to reduce the model errors and improve the forecast. The paper is organized as follows. Section 2 presents the ensemble Kalman data assimilation technique, Section 3 illustrates the use of the tools in a data assimilation test, and Section 4 summarizes our results.

2 Ensemble Kalman Filter Data Assimilation

Consider a nonlinear model c_i = M_{t_0 → t_i}(c_0) that advances the state from the initial time t_0 to future times t_i (i ≥ 1). The model state c_i at t_i (i ≥ 0) is an approximation of the "true" state of the system c^t_i at t_i (more exactly, c^t_i is the system state projected onto the model space). Observations y_i are available at times t_i and are corrupted by measurement and representativeness errors ε_i (assumed Gaussian with mean zero and covariance R_i), y_i = H_i(c^t_i) + ε_i. Here H_i is an operator that maps the model state to observations. The data assimilation problem is to find an optimal estimate of the state using both the information from the model (c_i) and from the observations (y_i). The (ensemble) Kalman filter estimates the true state c^t using the information from the current best estimate c^f (the "forecast" or the background state) and the observations y. The optimal estimate c^a (the "analysis" state) is obtained as


a linear combination of the forecast and observations that minimizes the variance of the analysis (P^a):

c^a = c^f + P^f H^T (H P^f H^T + R)^{−1} (y − H(c^f)) = c^f + K (y − H(c^f)).   (2)

The forecast covariance P^f is estimated from an ensemble of runs (which produces an ensemble of E model states c^f(e), e = 1, ..., E). The analysis formula (2) is applied to each member to obtain an analyzed ensemble. The model advances the solution from t_{i−1} to t_i, then the filter formula is used to incorporate the observations at t_i. The filter can be described as

c^f_i(e) = M(c^a_{i−1}(e)),   c^a_i(e) = c^f_i(e) + K_i (y_i − H_i(c^f_i(e))).   (3)

The results presented in this paper are obtained with the practical EnKF implementation discussed by Evensen [5].
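To make the analysis step (2)–(3) concrete, a minimal NumPy sketch for a linear observation operator follows. It uses the generic perturbed-observations EnKF variant and is an illustration only, not the STEM implementation or the exact scheme of [5].

```python
import numpy as np

def enkf_analysis(Cf, y, H, R, rng):
    """Perturbed-observations EnKF analysis for a forecast ensemble Cf (N x E).

    H is a linear observation operator (m x N), R the observation error
    covariance (m x m), y the observation vector (length m)."""
    N, E = Cf.shape
    Xp = Cf - Cf.mean(axis=1, keepdims=True)           # ensemble perturbations
    Pf_Ht = Xp @ (H @ Xp).T / (E - 1)                   # P^f H^T
    S = H @ Pf_Ht + R                                   # H P^f H^T + R
    K = Pf_Ht @ np.linalg.inv(S)                        # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=E).T
    return Cf + K @ (Y - H @ Cf)                        # analysis ensemble

# rng = np.random.default_rng(0)
# Ca = enkf_analysis(Cf, y_obs, H, R, rng)
```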

2.1 The Localization of EnKF (LEnKF)

The practical Kalman filter implementation employs a small ensemble of Monte Carlo simulations in order to approximate the background covariance (P^f). In its initial formulation, EnKF may suffer from spurious correlations caused by sub-sampling errors in the background covariance estimates. This allows observations to incorrectly impact remote model states. The filter localization introduces a restriction on the correction magnitude based on its remoteness. One way to impose localization in EnKF is to apply a decorrelation function ρ, that decreases with distance, to the background covariance. Following [8], the EnKF relation (2) with some simplifying assumptions becomes

c^a_i = c^f_i + (ρ(D_c) ∘ P^f_i H_i^T) (ρ(D_y) ∘ (H_i P^f_i H_i^T) + R_i)^{−1} (y_i − H_i(c^f_i)),   (4)

where D_{c,y} are distance matrices with positive elements (d_{i,j} ≥ 0), and 0 ≤ ρ(d_{i,j}) ≤ 1, ρ(0) = 1, for all i, j. The decorrelation function ρ is applied to the distance matrix and produces a decorrelation matrix (decreasing with the distance). The operation '∘' denotes the Schur product that applies ρ(D) elementwise to the projected covariance matrices P^f H^T and H P^f H^T, respectively. Here, D_y is calculated as the distance among the observation sites, and D_c contains the distance from each state variable to each observation site. We considered a Gaussian distribution for the decorrelation function ρ. Since our model generally has an anisotropic horizontal–vertical flow, we consider the two correlation components (and factors, δ) separately:

ρ(D^h, D^v) = exp(−(D^h/δ^h)² − (D^v/δ^v)²),   (5)

where D^h, D^v, δ^h, δ^v are the horizontal and vertical components. The horizontal correlation-distance relationship is determined through the NMC method [14]. The horizontal NMC-determined correlations were fitted with a Gaussian distribution, δ^h = 270 km. The vertical correlation was chosen as δ^v = 5 grid points.
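A sketch of this localized analysis is given below; the arrays rho_c (state-to-observation decorrelations) and rho_y (observation-to-observation decorrelations) are assumed to have been precomputed from the distance matrices via (5). Again, this is only an illustration of the formulas, not the operational code.

```python
import numpy as np

def gaussian_decorrelation(Dh, Dv, delta_h=270.0, delta_v=5.0):
    """Anisotropic decorrelation (5): rho = exp(-(Dh/dh)^2 - (Dv/dv)^2).
    Dh in km, Dv in grid points, matching the text above (illustrative defaults)."""
    return np.exp(-(Dh / delta_h) ** 2 - (Dv / delta_v) ** 2)

def localized_analysis(Cf, y, H, R, rho_c, rho_y):
    """LEnKF analysis (4): Schur (elementwise) products rho_c o (P^f H^T) and
    rho_y o (H P^f H^T) damp spurious long-range sample correlations."""
    E = Cf.shape[1]
    Xp = Cf - Cf.mean(axis=1, keepdims=True)
    Pf_Ht = Xp @ (H @ Xp).T / (E - 1)                   # shape (N states, m obs)
    HPf_Ht = (H @ Xp) @ (H @ Xp).T / (E - 1)            # shape (m obs, m obs)
    K = (rho_c * Pf_Ht) @ np.linalg.inv(rho_y * HPf_Ht + R)
    return Cf + K @ (y[:, None] - H @ Cf)
```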

2.2 Preventing Filter Divergence

The "textbook application" of EnKF [5] may lead to filter divergence [7]: EnKF shows a decreasing ability to correct ensemble states toward the observations. This is due to an underestimation of the model error covariance magnitude during the integration. The filter becomes "too confident" in the model and "ignores" the observations in the analysis process. The solution is to increase the covariance of the ensemble and therefore decrease the filter's confidence in the model. The following are several ways to "inflate" the ensemble covariance. The first method is additive inflation [4], where the model errors are simulated by adding uncorrelated noise (denoted by η) to the model (η_−) or analysis (η_+) results. This increases the diagonal entries of the ensemble covariance. Since the correlation of the model errors is to a large extent unknown, white noise is typically chosen. With the notation (3), c^f_i(e) = M(c^a_{i−1}(e)) + η_−(e) + η_+(e). The second method is multiplicative inflation [1], where each member's deviation from the ensemble mean is multiplied by a constant (γ > 1). This increases each entry of the ensemble covariance by that constant squared (γ²). The ensemble can be inflated before (γ_−) or after (γ_+) filtering: c^{f/a}_i(e) ← ⟨c^{f/a}_i⟩ + γ_{−/+} (c^{f/a}_i(e) − ⟨c^{f/a}_i⟩), where ⟨·⟩ denotes the ensemble average. A third possibility for covariance inflation is through perturbations applied to key model parameters, and we refer to it as model-specific inflation. This approach focuses on sources of uncertainty that are specific to each model (for instance, in CTMs: boundary conditions, emissions, and meteorological fields). With the notation (3) and considering p as a set of model parameters, the model-specific inflation can be written as c^f_i(e) = M(c^a_{i−1}(e), α_{i−1}(e) p_{i−1}), where α(e) are random perturbation factors of the model parameters.
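The three inflation strategies amount to small ensemble transformations; a sketch follows, with the ensemble stored as an N × E array. The noise amplitude and perturbation magnitude arguments are illustrative placeholders, not values used in the paper.

```python
import numpy as np

def multiplicative_inflation(C, gamma):
    """Scale each member's deviation from the ensemble mean by gamma > 1."""
    mean = C.mean(axis=1, keepdims=True)
    return mean + gamma * (C - mean)

def additive_inflation(C, sigma, rng):
    """Add uncorrelated (white) noise of standard deviation sigma to every member."""
    return C + sigma * rng.standard_normal(C.shape)

def model_specific_perturbation(params, rel_amplitude, rng):
    """Model-specific inflation: perturb uncertain model parameters (e.g. emissions,
    boundary values) by random factors alpha, one realization per ensemble member."""
    alpha = 1.0 + rel_amplitude * rng.standard_normal(params.shape)
    return alpha * params
```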

2.3 Inflation Localization

The traditional approach to covariance inflation increases the spread of the ensemble equally throughout the computational domain. In the LEnKF framework, the corrections are restricted to a region that is rich in observations. These states are corrected and their variance is reduced, while the remote states (i.e., the states that are relatively far from the observations' locations) maintain their initial variation, which is potentially reduced only by the model evolution. The spread of the ensemble at the remote states may be increased to unreasonably large values through successive inflation steps, and thus the covariance inflation needs to be restricted in order to avoid the over-inflation of the remote states. A sensible inflation restriction can be based on the localization operator, ρ(D), which is applied in the same way as for the covariance localization. The localized multiplicative inflation factor, γ̃, is given by

γ̃(i, j, k) = max{ρ(D_c(i, j, k))} (γ − 1) + 1,   (6)

where γ is the (non-localized) multiplicative inflation factor and i, j, k refer to the spatial coordinates. In this way, the localized inflation increases the ensemble spread only in the information-rich regions where filter divergence can occur.
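Assuming rho_c holds the decorrelation values ρ(D_c) between every state location and every observation site (as in the localization sketch above), the localized factor of (6) reduces to a one-line computation per state:

```python
import numpy as np

def localized_inflation_factor(rho_c, gamma):
    """Eq. (6): gamma_loc = max over observation sites of rho * (gamma - 1) + 1,
    so inflation acts only where observations actually constrain the state."""
    return rho_c.max(axis=1) * (gamma - 1.0) + 1.0

# gamma_loc = localized_inflation_factor(rho_c, gamma=1.1)   # one factor per state
# mean = C.mean(axis=1, keepdims=True)
# C = mean + gamma_loc[:, None] * (C - mean)                 # localized multiplicative inflation
```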

3 Numerical Results

The test case is a real-life simulation of air pollution in the North-Eastern U.S. in July 2004, as shown in Figure 1.a (the dash-dotted line delimits the domain). The observations used for data assimilation are the ground-level ozone (O3) measurements taken during the ICARTT [9,17] campaign in 2004 (which also includes the initial concentrations, meteorological fields, boundary values, and emission rates). Figure 1.a shows the location of the ground stations (340 in total) that measured ozone concentrations and an ozonesonde (not used in the assimilation process). The computational domain covers 1500 × 1320 × 20 Km with a horizontal resolution of 60 × 60 Km and a variable vertical resolution. The simulations are started at 0 GMT July 20th with a four hour initialization step ([-4,0] hours). The "best guess" of the state of the atmosphere at 0 GMT July 20th is used to initialize the deterministic solution. The ensemble members are formed by adding a set of unbiased perturbations to the best guess, and then evolving each member to 4 GMT July 20th. The perturbation is formed according to an AR model [3], making it flow dependent. The 24 hour assimilation window starts at 4 GMT July 20th (denoted by [1,24] hours). Observations are available at each integer hour in this window, i.e., at 1, 2, ..., 24 hours (Figure 1.a). EnKF adjusts the concentration fields of 66 "control" chemical species in each grid point of the domain every hour using (2). The ensemble size was chosen to be 50 members (a typical size in NWP). A 24 hour forecast window is also considered, starting at 4 GMT July 21st (denoted by [24,48] hours). The performance of each data assimilation experiment is measured by the R² correlation factor (correlation²) between the observations and the model solution. The R² correlation results between the observations and model values for all the numerical experiments are shown in Table 1. The deterministic (best guess) solution yields an R² of 0.24 in the analysis and 0.28 in the forecast windows. In Table 1 we also show the results for a 4D-Var experiment. Figure 2.a shows the O3 concentration measured at a Washington DC station and predicted by the EnKF and LEnKF with model-specific inflation. Figure 2.b shows the ozone concentration profile measured by the ozonesonde for the EnKF and LEnKF with additive inflation. Two effects are clear for the "textbook" EnKF. The filter diverges after about 12 hours (2.a), and spurious corrections are made at higher altitudes (2.b), as the distance from the observation (ground) sites increases. The vertical profile in Figure 2.b shows great improvement in the analyzed solution of LEnKF. The results in Table 1 confirm the benefits of localization by dramatically improving the analysis and forecast fit.

Fig. 1. Ground measuring stations in support of the ICARTT campaign (340 in total) and the ozonesonde (S) launch location


Table 1. The R² measure of model-observations match in the assimilation and forecast windows for EnKF, 4D-Var, and LEnKF. (Multiplicative inflation: γ_− ≤ 4, γ_+ ≤ 4; model-specific inflation: 10% emissions, 10% boundaries, 3% wind.)

Method & Details                                                            R² analysis   R² forecast
Deterministic solution, no assimilation                                         0.24          0.28
EnKF, "textbook application"                                                    0.38          0.30
4D-Var, 50 iterations                                                           0.52          0.29
LEnKF, model-specific inflation                                                 0.88          0.32
LEnKF, multiplicative inflation                                                 0.82          0.32
LEnKF, additive inflation                                                       0.92          0.31
LEnKF with parameter assimilation and multiplicative localized inflation        0.89          0.41

Fig. 2. Ozone concentration (a) measured at a Washington DC station (ICARTT ID: 510590030) and predicted by EnKF ("textbook") and LEnKF with model-specific inflation, and (b) measured by the ozonesonde for EnKF and LEnKF. Ground level ozone concentration field (c,d) at 14 EDT in the forecast window measured by the ICARTT stations (shown in color coded filled circles) and predicted by EnKF ("textbook application", c) and LEnKF with multiplicative inflation (d)


3.1 Joint State and Parameter Assimilation

In regional CTMs the influence of the initial conditions diminishes rapidly with time, and the concentration fields are "driven" by emissions and by lateral boundary conditions. Since both of them are generally poorly known, it is of considerable interest to improve their values using information from observations. In this setting we have to solve a joint state-parameter assimilation problem [6]. The emission rates and lateral boundary conditions are multiplied by specific correction coefficients, α. These correction coefficients are appended to the model state. The LEnKF data assimilation is then carried out with the augmented model state. With the notation (1), LEnKF is applied to

[c^f_i, α^{(1,2)}_i]^T = M_{t_{i−1} → t_i}(c^a_{i−1}, u_{i−1}, α^{(1)}_{i−1} c^{in}_{i−1}, α^{(2)}_{i−1} Q_{i−1}).

For α, we consider a different correction for each species and each gridpoint. The initial ensemble of correction factors is an independent set of normal variables, and the localization is done in the same way as in the state-only case. The R² after LEnKF data assimilation for combined state and emission correction coefficients (presented in Table 1) shows improvements in both the forecast and the analysis windows. Figures 2.(c,d) show the ground-level ozone concentration field at 14 EDT in the forecast window as measured by the ICARTT stations, and as predicted by EnKF with state corrections and by LEnKF with joint state-parameter corrections. In the LEnKF case under consideration, the addition of the correction parameters to the assimilation process improves the assimilated solution (especially on the inflow (West) boundary).
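Joint state-parameter assimilation only requires augmenting each ensemble member before the (localized) analysis and splitting it again afterwards. A hypothetical sketch, reusing the localized_analysis routine sketched in Sect. 2.1 and assuming an observation operator H_aug with zero columns for the appended coefficients, plus a correspondingly extended decorrelation array rho_c_aug:

```python
import numpy as np

def augment(C, alpha):
    """Append correction coefficients (e.g. for emissions and lateral boundary
    conditions) to each ensemble member's state vector."""
    return np.vstack([C, alpha])

def split(Z, n_state):
    """Recover concentrations and correction coefficients after the analysis."""
    return Z[:n_state, :], Z[n_state:, :]

# Z  = augment(Cf, alpha_ens)                 # LEnKF acts on the augmented state
# Za = localized_analysis(Z, y, H_aug, R, rho_c_aug, rho_y)
# Ca, alpha_a = split(Za, Cf.shape[0])
```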

4 Conclusions

This paper discusses some of the challenges associated with the application of nonlinear ensemble filtering data assimilation to atmospheric CTMs. Three aspects are analyzed in this study: filter divergence (CTMs tend to dampen perturbations), spurious corrections (small ensemble sizes cause wrong increments), and model parametrization errors (without correcting model errors in the analysis, correcting the state only does not help in improving the forecast accuracy). Experiments showed that the filter diverges quickly. The influence of the initial conditions fades in time as the fields are largely determined by emissions and by lateral boundary conditions. Consequently, the initial spread of the ensemble is diminished in time. Moreover, stiff systems (like chemistry) are stable: small perturbations are damped out quickly in time. In order to prevent filter divergence, the spread of the ensemble needs to be explicitly increased. We investigated three approaches to ensemble covariance inflation, among which model-specific inflation is the most intuitive. The "localization" of EnKF is needed in order to avoid the spurious corrections noticed in the "textbook" application. The correlation distances are approximated using the NMC method. Furthermore, covariance localization prevents over-inflation of the states that are


remote from the observations. LEnKF increased the accuracy of both the analysis and the forecast, at the observation sites as well as at locations distant from the observations. Since the solution of a regional CTM is largely influenced by uncertain lateral boundary conditions and by uncertain emissions, it is of great importance to adjust these parameters through data assimilation. The assimilation of emissions and boundary conditions visibly improves the quality of the analysis.

References

1. J.L. Anderson. An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129:2884–2903, 2001.
2. G. Burgers, P.J. van Leeuwen, and G. Evensen. Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126:1719–1724, 1998.
3. E.M. Constantinescu, T. Chai, A. Sandu, and G.R. Carmichael. Autoregressive models of background errors for chemical data assimilation. To appear in J. Geophys. Res., 2006.
4. M. Corazza, E. Kalnay, and D. Patil. Use of the breeding technique to estimate the shape of the analysis "errors of the day". Nonl. Pr. Geophys., 10:233–243, 2002.
5. G. Evensen. The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dynamics, 53, 2003.
6. G. Evensen. The combined parameter and state estimation problem. Submitted to Ocean Dynamics, 2005.
7. P.L. Houtekamer and H.L. Mitchell. Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126:796–811, 1998.
8. P.L. Houtekamer and H.L. Mitchell. A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev., 129:123–137, 2001.
9. ICARTT. ICARTT home page: http://www.al.noaa.gov/ICARTT.
10. A.H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, 1970.
11. B.-E. Jun and D.S. Bernstein. Least-correlation estimates for errors-in-variables models. Int'l J. Adaptive Control and Signal Processing, 20(7):337–351, 2006.
12. R.E. Kalman. A new approach to linear filtering and prediction problems. Trans. ASME, Ser. D: J. Basic Eng., 83:95–108, 1960.
13. R. Menard, S.E. Cohn, L.-P. Chang, and P.M. Lyster. Stratospheric assimilation of chemical tracer observations using a Kalman filter. Part I: Formulation. Mon. Wea. Rev., 128:2654–2671, 2000.
14. D.F. Parrish and J.C. Derber. The national meteorological center's spectral statistical-interpolation analysis system. Mon. Wea. Rev., (120):1747–1763, 1992.
15. A. Sandu, J.G. Blom, E. Spee, J.G. Verwer, F.A. Potra, and G.R. Carmichael. Benchmarking stiff ODE solvers for atmospheric chemistry equations II - Rosenbrock solvers. Atm. Env., 31:3459–3472, 1997.
16. A. Sandu, D. Daescu, G.R. Carmichael, and T. Chai. Adjoint sensitivity analysis of regional air quality models. J. of Comp. Phy., 204:222–252, 2005.
17. Y. Tang et al. The influence of lateral and top boundary conditions on regional air quality prediction: a multi-scale study coupling regional and global chemical transport models. Submitted to J. Geophys. Res., 2006.
18. M. van Loon, P.J.H. Builtjes, and A.J. Segers. Data assimilation of ozone in the atmospheric transport chemistry model LOTOS. Env. Model. and Soft., 15:603–609, 2000.

Data Assimilation in Multiscale Chemical Transport Models

Lin Zhang and Adrian Sandu

Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, US
{lin83, asandu}@vt.edu

Abstract. In this paper we discuss variational data assimilation using the STEM atmospheric Chemical Transport Model. STEM is a multiscale model and can perform air quality simulations and predictions over spatial and temporal scales of different orders of magnitude. To improve the accuracy of model predictions we construct a dynamic data driven application system (DDDAS) by integrating data assimilation techniques with STEM. We illustrate the improvements in STEM field predictions before and after data assimilation. We also compare three popular optimization methods for data assimilation and conclude that the L-BFGS method is the best for our model because it requires fewer model runs to recover optimal initial conditions. Keywords: STEM, Chemical Transport Model, Data Assimilation.

1 Introduction

The development of modern industry has released large amounts of pollutants into the atmosphere, with a profound influence on people's lives. To analyze and control air quality, large comprehensive models are indispensable. STEM (Sulfur Transport Eulerian Model) [1] is a chemical transport model used to simulate air quality evolution and make predictions. Species in the atmosphere change over widely different time and space scales, which requires STEM to be a multiscale system in order to fully simulate these changes. The dynamic incorporation of additional data into an executing application is an essential DDDAS concept with wide applicability (http://www.cise.nsf.gov/dddas). In this paper we focus on data assimilation, the process in which measurements are used to constrain model predictions; the information from measurements can be used to obtain better initial conditions, better boundary conditions, enhanced emission estimates, etc. Data assimilation is essential in weather/climate analysis and forecast activities and is employed here in the context of atmospheric chemistry and transport models. The Kalman filter technique [2] gives a stochastic approach to the data assimilation problem, while variational methods (3D-Var, 4D-Var) provide an optimal control approach. Early applications of four-dimensional variational (4D-Var) data assimilation were presented by Fisher and Lary [3] for a stratospheric photochemical box model with trajectories. Khattatov et al. [4] implemented both the 4D-Var and a


Kalman filter method using a similar model. In the past few years variational methods have been successfully used in data assimilation for comprehensive three-dimensional atmospheric chemistry models (Elbern and Schmidt [5], Errera and Fonteyn [6]). Wang et al. [7] provide a review of data assimilation applications to atmospheric chemistry. As our STEM model is time dependent in 3D space, 4D-Var is the appropriate approach to data assimilation. The paper is organized as follows. The second section introduces the STEM Chemical Transport Model. The theory and importance of 4D-Var data assimilation are presented in the third section, followed by some results of data assimilation using STEM in the fourth section. In section five, we briefly describe the L-BFGS (limited-memory Broyden-Fletcher-Goldfarb-Shanno), Nonlinear Conjugate Gradient, and Hessian Free Newton methods and assess their performance in the STEM model. Summary and conclusions are given in section six.

2 The STEM Chemical Transport Model

STEM is a regional atmospheric Chemical Transport Model (CTM). Taking emissions, meteorology (wind, temperature, humidity, precipitation, etc.), and a set of chemical initial and boundary conditions as input, it simulates the behavior of pollutants in the selected domain. In the following we give the mathematical description of the Chemical Transport Model [8].

2.1 Chemical Transport Model in STEM

We denote by u the wind field vector, K the turbulent diffusivity tensor, and ρ the air density in molecules/cm³. Let V_i^dep be the deposition velocity of species i, Q_i the rate of surface emissions, and E_i the rate of elevated emissions for this species. The rate of chemical transformations f_i depends on absolute concentration values; the rate at which mole-fraction concentrations change is then f_i(ρc)/ρ. Consider a domain Ω which covers a region of the atmosphere with the boundary ∂Ω. At each time moment the boundary of the domain is partitioned into ∂Ω = Γ^IN ∪ Γ^OUT ∪ Γ^GR, where Γ^GR is the ground-level portion of the boundary, Γ^IN is the inflow part of the lateral or top boundary, and Γ^OUT the outflow part. The evolution of concentrations in time is described by the material balance equations

∂c_i/∂t = −u·∇c_i + (1/ρ) ∇·(ρK∇c_i) + (1/ρ) f_i(ρc) + E_i,   for t_0 ≤ t ≤ T,   (1)
c_i(t_0, x) = c_i^0(x),                                                           (2)
c_i(t, x) = c_i^IN(t, x)          for x ∈ Γ^IN,                                   (3)
K ∂c_i/∂n = 0                     for x ∈ Γ^OUT,                                  (4)
K ∂c_i/∂n = V_i^dep c_i − Q_i     for x ∈ Γ^GR,   for all 1 ≤ i ≤ s.              (5)

The system (1)–(5) builds up the forward (direct) model. To simplify the presentation, in this paper we consider the initial state c^0 as the parameters of the model. It is known that this does not restrict the generality of the formulation. An infinitesimal perturbation δc^0 in the parameters will result in perturbations δc_i(t) of the concentration fields. These perturbations are solutions of the tangent linear model. In the direct sensitivity analysis approach we can solve the model (1)–(5) together with the tangent linear model forward in time.

2.2 Continuous Adjoint Model in STEM

Consider a scalar response functional defined in terms of the model solution c(t),

J(c^0) = ∫_{t_0}^{T} ∫_Ω g(c(t, x)) dx dt.   (6)

The response depends implicitly on the parameters c^0 via the dependence of c(t) on c^0. The continuous adjoint model is defined as the adjoint of the tangent linear model. By imposing the Lagrange identity and after a careful integration by parts one arrives at the following equations that govern the evolution of the adjoint variables:

∂λ_i/∂t = −∇·(u λ_i) − ∇·(ρK ∇(λ_i/ρ)) − (F^T(ρc) λ)_i − φ_i,   T ≥ t ≥ t_0,   (7)
λ_i(T, x) = λ_i^F(x),                                                           (8)
λ_i(t, x) = 0                          for x ∈ Γ^IN,                            (9)
λ_i u + ρK ∂(λ_i/ρ)/∂n = 0             for x ∈ Γ^OUT,                          (10)
ρK ∂(λ_i/ρ)/∂n = V_i^dep λ_i           for x ∈ Γ^GR,   for all 1 ≤ i ≤ s,      (11)

where

φ_i(t, x) = ∂g(c_1, ..., c_n)/∂c_i (t, x),   λ_i^F(x) = 0,   (12)

and λ_i(t, x) are the adjoint variables associated with the concentrations c_i(t, x), 1 ≤ i ≤ s. In the above, F = ∂f/∂c is the Jacobian of the chemical rate function f. To obtain the ground boundary condition we use the fact that u·n = 0 at ground level. We refer to (7)–(11) as the (continuous) adjoint system of the tangent linear model. In the context of optimal control, where the minimization of the functional (6) is required, the adjoint variables can be interpreted as Lagrange multipliers. The adjoint system (7)–(11) depends on the states of the forward model (i.e., on the concentration fields, through the nonlinear chemical term F(ρc) and possibly through the forcing term φ for nonlinear functionals). Note that the adjoint initial condition is


posed at the final time T, so the forward model must first be solved forward in time and the state c(t, x) saved for all t; the adjoint model can then be integrated backwards in time from T down to t_0.

2.3 Properties of STEM

The model uses the SAPRC-99 (Statewide Air Pollution Research Center) chemical mechanism [9] and KPP (the Kinetic PreProcessor) [10] to determine the chemical reactions. KPP implements the integration of the chemical mechanism using implicit Rosenbrock and Runge-Kutta methods in both the forward and adjoint models. The STEM model runs multiscale simulations in both time and space. In time, the scales range from 10⁻⁶ seconds for fast chemical reactions to multi-day simulations measured in hours. Fast chemical reactions refer to atomic-level processes, such as O and OH radical activity, while long-term simulations usually account for the transport of atmospheric species over large distances. In terms of spatial scales, STEM can simulate ranges measured in meters, such as emissions of pollutants like NO, NO2, CO2, Volatile Organic Compounds (VOC), and particles from vehicles, as well as continental scales as large as thousands of kilometers, which are usually used in STEM for air quality simulation. So far STEM has been employed for simulations over the U.S., Asia, and Europe. For this paper, we run STEM on a horizontal resolution of 60 Km by 60 Km, with 21 vertical levels defined in the Regional Atmospheric Modeling System's sigma-z coordinate system. The domain covers the northeast of the U.S., ranging from 68°W to 85°W and from 36°N to 48°N. The simulations are carried out from 8am EDT to 8pm EDT on July 20, 2004, and the dynamical time step is 15 minutes.

3 4D-Var Data Assimilation in STEM

Data assimilation is the process by which measurements are used to constrain model predictions; the information from measurements can be used to obtain better initial conditions, better boundary conditions, enhanced emission estimates, etc. Data assimilation combines information from three different sources: the physical and chemical laws of evolution (encapsulated in the model), the reality (as captured by the observations), and the current best estimate of the distribution of tracers in the atmosphere. In this paper we focus on obtaining optimized initial conditions, which are essential in the forward model integration. 4D-Var data assimilation can be applied to STEM and is expected to improve air quality forecasting. In practice, directly measuring the parameters of the atmospheric conditions over a large region is difficult because of sampling, technical, and resource requirements, and due to the complexity of the chemical transport model, the number of possible state variables is extremely large. However, estimating fields and parameters via data assimilation is feasible, provided enough observations are available. Figure 1 illustrates the employment of data assimilation in multiscale systems. The atmospheric chemical and transport processes take place over a wide range of time and space scales. The observations for data assimilation can come from local stations, planes, and satellites, measured from seconds to weeks in time and from nanometers to kilometers in space. Data assimilation helps models to improve environmental policy decisions.


Fig. 1. Atmospheric Chemical and Transport Processes take place at multiple temporal and spatial scales

We applied 4D-Var data assimilation to obtain the optimal initial conditions using STEM by minimizing a cost function that measures the misfit between model predictions and observations, as well as the deviation of the solution from the background state. The cost function is formulated as

min_{c^0} J(c^0) = (1/2) (c^0 − c^B)^T B^{−1} (c^0 − c^B) + (1/2) Σ_{k=0}^{N} (H_k c^k − c^k_obs)^T R_k^{−1} (H_k c^k − c^k_obs)   (13)

and our goal is to minimize the cost function J. In the above formula, c^B represents the a priori estimate (background) of the initial values and B is the associated covariance matrix of the estimated background error. H_k is an observation operator and c^k_obs are the real observations at time k. The covariance matrix R_k accounts for observation and representativeness errors. The 4D-Var data assimilation is implemented by an iterative optimization procedure: each iteration requires STEM to run a forward integration to obtain the value of the cost function and an adjoint model run to evaluate its gradient. Since the number of model states is as high as 10⁶ in our air quality simulation problem, it is prohibitive to evaluate the Hessian of the cost function. Therefore, we choose three optimization methods that only require the values and the gradients of the cost function.
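The structure of one optimization iteration — one forward run for the cost in (13) and one adjoint run for its gradient — can be sketched as follows. The forward and adjoint callables stand in for the actual STEM model interfaces; they, together with the operator and covariance arguments, are placeholders rather than the real API.

```python
import numpy as np

def cost_and_gradient(c0, cB, Binv, forward, adjoint, H, Rinv, obs):
    """Evaluate J(c0) from (13) and its gradient.

    forward(c0)          -> list of model states c^k, k = 0..N   (one forward run)
    adjoint(c0, forcing)  -> gradient contribution of the observation term,
                             obtained from one adjoint run forced by the misfits."""
    states = forward(c0)
    misfits = [H[k] @ states[k] - obs[k] for k in range(len(states))]
    J = 0.5 * (c0 - cB) @ Binv @ (c0 - cB)
    J += 0.5 * sum(d @ Rinv[k] @ d for k, d in enumerate(misfits))
    forcing = [H[k].T @ (Rinv[k] @ d) for k, d in enumerate(misfits)]
    grad = Binv @ (c0 - cB) + adjoint(c0, forcing)
    return J, grad
```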

4 Results for Data Assimilation

We performed data assimilation to optimize the initial conditions. The simulation interval is from 8am EDT to 8pm EDT on July 20, 2004. Figure 2 shows the simulation domain and three selected AirNow stations. AirNow stations provide hourly observations of ground-level ozone throughout the entire month of July 2004. To show the change of ozone in this time interval at a given location, we choose three of the AirNow stations. The ozone time series at these three stations are illustrated in Figure 3. From the figure we see that the assimilated curves are closer to the observations than the non-assimilated ones, which indicates the improvement in model predictions after data assimilation. This is also confirmed by the scatter and


quantile-quantile plots of Figure 4, which indicate that the correlation coefficient between model predictions and observations increases considerably from R2 = 0.15 for the original model to R2 = 0.68 after assimilation.

Fig. 2. Three selected stations where O3 time series are considered

Fig. 3. Time series of ozone concentrations at (a) station A, (b) station B, and (c) station C

Fig. 4. Scatter plot and quantile-quantile plot of model-observations agreement: (a) original (R2 = 0.15), (b) assimilated (R2 = 0.68)


5 Assessment of Three Optimization Methods

We applied three optimization methods, L-BFGS, Nonlinear Conjugate Gradient, and Hessian Free Newton, for data assimilation in STEM and assess their performance. The principle of L-BFGS [11] is to approximate the Hessian matrix G by a symmetric positive definite matrix H, and to update H at each step using the information from the m most recent iterations. The Nonlinear Conjugate Gradient method is an iterative method that generates a set of search directions {p_0, p_1, ..., p_m} that are mutually conjugate, i.e., p_i^T G p_j = 0 for i ≠ j. The Fletcher-Reeves Conjugate Gradient method is used in this paper. Hessian Free Newton is an inexact Newton method in which the Hessian matrix is not available, and automatic differentiation or finite differences are used to approximate the products of the Hessian with a vector [12].
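As an illustration of how such gradient-based optimizers are driven, the call below uses SciPy's limited-memory BFGS variant (L-BFGS-B) together with the cost-and-gradient routine sketched in Sect. 3; cB, Binv, forward, adjoint, H, Rinv, obs, and c0_guess are placeholders for the corresponding STEM quantities, and this is not the optimization code used in the paper.

```python
from scipy.optimize import minimize

# cost_and_gradient returns (J, grad); each evaluation costs one forward plus
# one adjoint STEM run.
fun = lambda c0: cost_and_gradient(c0, cB, Binv, forward, adjoint, H, Rinv, obs)

result = minimize(fun, c0_guess, jac=True, method='L-BFGS-B',
                  options={'maxiter': 15, 'maxcor': 10})  # keep the m most recent pairs
c0_opt = result.x
```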

Fig. 5. Decrease of the cost function vs. the number of model runs for three methods

These methods are each tested to optimize the initial concentrations. All the optimizations start at the same cost function value of around 54,800 and converge to about 16,000 within 15 iterations, which shows that all three are able to solve the data assimilation problem in the STEM model. The difference lies in the number of model runs required for convergence. For every model run, STEM calls the forward model and the adjoint model to evaluate the value and gradient of the cost function for the optimization subroutine, so more model runs mean more time spent in optimization. Figure 5 shows the performance of these methods in terms of the number of model runs they require. L-BFGS clearly converges the fastest. We conclude that, of the three optimization methods, L-BFGS is the best for data assimilation in STEM.

6 Conclusions

STEM is a multiscale atmospheric chemical transport model and has been used for regional air quality simulation. Model predictions can be improved by the technique of data assimilation. Data assimilation allows combining the information from observations with the STEM forward and adjoint models to obtain the best estimates of the three-dimensional distribution of tracers. Reanalyzed fields can be used to initialize air quality forecast runs and have the potential to improve the air quality predictions of the STEM model. STEM and data assimilation are therefore closely linked. In this


paper, we perform 4D-Var data assimilation over the northeastern U.S. using the STEM model to optimize the initial conditions. Both the data and the figures show a marked improvement in the STEM simulation after data assimilation. In addition, we assess the performance of three optimization methods used to implement data assimilation in the STEM model. The results imply that, of the three methods, L-BFGS is the best fit for the STEM model. Future work will focus on implementing a second-order adjoint model in STEM to provide the Hessian of the cost function; this will allow the use of more accurate optimization methods for data assimilation. Acknowledgements. This work was supported by the Houston Advanced Research Center (HARC) through the award H59/2005 managed by Dr. Jay Olaguer and by the National Science Foundation through the award NSF ITR AP&IM 0205198 managed by Dr. Frederica Darema.

References

1. Carmichael, G.R., Peters, L.K., Saylor, R.D.: The STEM-II regional scale acid deposition and photochemical oxidant model - I. An overview of model development and applications. Atmospheric Environment 25A: 2077–2090, 1990.
2. Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans. ASME, Ser. D: J. Basic Eng., 83:95–108, 1960.
3. Fisher, M. and Lary, D.J.: Lagrangian four-dimensional variational data assimilation of chemical species. Q.J.R. Meteorology, 121:1681–1704, 1995.
4. Khattatov, B.V., Gille, J.C., Lyjak, L.V., Brasseur, G.P., Dvortsov, V.L., Roche, A.E., and Walters, J.: Assimilation of photochemically active species and a case analysis of UARS data. Journal of Geophysical Research, 104:18715–18737, 1999.
5. Elbern, H. and Schmidt, H.: A 4D-Var chemistry data assimilation scheme for Eulerian chemistry transport modeling. Journal of Geophysical Research, 104(5):18583–18589, 1999.
6. Errera, Q. and Fonteyn, D.: Four-dimensional variational chemical assimilation of CRISTA stratospheric measurements. Journal of Geophysical Research, 106(D11):12253–12265, 2001.
7. Wang, K.Y., Lary, D.J., Shallcross, D.E., Hall, S.M., and Pyle, J.A.: A review on the use of the adjoint method in four-dimensional atmospheric-chemistry data assimilation. Q.J.R. Meteorol. Soc., 127(576(Part B)):2181–2204, 2001.
8. Sandu, A., Daescu, D.N., Carmichael, G.R. and Chai, T.: Adjoint Sensitivity Analysis of Regional Air Quality Models. Journal of Computational Physics, Vol. 204: 222–252, 2005.
9. Carter, W.P.L.: Documentation of the SAPRC-99 chemical mechanism for VOC reactivity assessment, final report to California Air Resources Board. Technical Report, University of California at Riverside, 2000.
10. Damian, V., Sandu, A., Damian, M., Potra, F. and Carmichael, G.R.: The Kinetic PreProcessor KPP - A Software Environment for Solving Chemical Kinetics. Computers and Chemical Engineering, Vol. 26, No. 11: 1567–1579, 2002.
11. Liu, D.C. and Nocedal, J.: On the limited memory BFGS method for large-scale optimization. Math. Programming 45: 503–528, 1989.
12. Morales, J.L. and Nocedal, J.: Enriched Methods for Large-Scale Unconstrained Optimization. Computational Optimization and Applications, 21: 143–154, 2002.

Building a Dynamic Data Driven Application System for Hurricane Forecasting

Gabrielle Allen

Center for Computation & Technology and Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803
[email protected]
http://www.cct.lsu.edu

Abstract. The Louisiana Coastal Area presents an array of rich and urgent scientific problems that require new computational approaches. These problems are interconnected with common components: hurricane activity is aggravated by ongoing wetland erosion; water circulation models are used in hurricane forecasts, ecological planning and emergency response; environmental sensors provide information for models of different processes with varying spatial and time scales. This has prompted programs to build an integrated, comprehensive, computational framework for meteorological, coastal, and ecological models. Dynamic and adaptive capabilities are crucially important for such a framework, providing the ability to integrate coupled models with real-time sensor information, or to enable deadline based scenarios and emergency decision control systems. This paper describes the ongoing development of a Dynamic Data Driven Application System for coastal and environmental applications (DynaCode), highlighting the challenges of providing accurate and timely forecasts for hurricane events. Keywords: Dynamic data driven application systems, DDDAS, hurricane forecasting, event driven computing, priority computing, coastal modeling, computational frameworks.

1 Introduction

The economically important Louisiana Coastal Area (LCA) is one of the world's most environmentally damaged ecosystems. In the past century nearly one-third of its wetlands have been lost and it is predicted that with no action by 2050 only one-third of the wetlands will remain. Beyond economic loss, LCA erosion has devastating effects on its inhabitants, especially in New Orleans whose location makes it extremely vulnerable to hurricanes and tropical storms. On 29th August 2005 Hurricane Katrina hit New Orleans, with storm surge and flooding resulting in a tragic loss of life and destruction of property and infrastructure. Soon after, Hurricane Rita caused similar devastation in the much less populated area of southwest Louisiana. In both cases entire communities were destroyed. To effectively model the LCA region, a new comprehensive and dynamic approach is needed including the development of an integrated framework for


coastal and environmental modeling capable of simulating all relevant interacting processes from erosion to storm surge to ecosystem biodiversity, handling multiple time (hours to years) and length (meters to kilometers) scales. This framework needs the ability to dynamically couple models and invoke algorithms based on streamed sensor or satellite data, locate appropriate data and resources, and create necessary workflows on demand, all in real-time. Such a system would enable restoration strategies; improve ecological forecasting, sensor placement, and control of water diversion for salinity; predict or control harmful algal blooms; and support sea rescue and oil spill response. In extreme situations, such as approaching hurricanes, results from multiple coupled ensemble models, dynamically compared with observations, could greatly improve emergency warnings. These desired capabilities are included in the emerging field of Dynamic Data Driven Application Systems (DDDAS), which describes new complex, and inherently multidisciplinary, application scenarios where simulations can dynamically ingest and respond to real-time data from measuring devices, experimental equipment, or other simulations. In these scenarios, simulation codes are in turn also able to control these varied inputs, providing for advanced control loops integrated with simulation codes. Implementing these scenarios requires advances in simulation codes, algorithms, computer systems and measuring devices. This paper describes work in the NSF-funded DynaCode project to create a general DDDAS toolkit with applications in coastal and environmental modeling; a futuristic scenario (Sec. 2) provides general needs (Sec. 3) for components (Sec. 4). The rest of this section describes ongoing coastal research and development programs aligned with DynaCode, forming a scientific research foundation:

Lake Pontchartrain Forecast System. During Hurricane Katrina storm surge water from Lake Pontchartrain flooded New Orleans via breaches in outfall canals. The Army Corps of Engineers plans to close Interim Gated Structures at canal mouths during future storms, but this takes several hours, cannot occur in strong winds, and must be delayed as long as possible for storm rain water drainage. The Lake Pontchartrain Forecast System (LPFS), developed by UNC and LSU, provides timely information to the Army Corps to aid in decision making for gate closing. LPFS is activated if a National Hurricane Center advisory places a storm track within 271 nautical miles of the canals (this trigger rule is sketched below), and an ensemble of storm surge (ADCIRC) runs is automatically deployed across the Louisiana Optical Network Initiative (LONI, http://www.loni.org), where mechanisms are in place to ensure they complete within two hours and results are provided to the Corps.

Louisiana CLEAR Program. The Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) program is developing ecological and predictive models to connect ecosystem needs with engineering design. CLEAR has developed a modeling tool to evaluate restoration alternatives using a combination of modules that predict physical processes, geomorphic features, and ecological succession. In addition, simulation models are being developed to provide an ecosystem forecasting system for the Mississippi Delta. This system will address questions such as what will happen to the Mississippi River Deltaic Plain under different scenarios of restoration alternatives, and what will be the benefits to society?
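The LPFS activation rule described above can be expressed as a simple check over the advisory track. The following is a hypothetical sketch only: the Advisory structure, the distance function, and the ensemble-submission call are invented names standing in for the actual LPFS components.

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    track_points: list   # forecast track positions, e.g. [(lat, lon), ...]

def lpfs_should_activate(advisory, canal_location, distance_nm, threshold_nm=271.0):
    """Activate the ADCIRC ensemble if any point of the advisory track comes
    within the threshold distance (271 nautical miles) of the outfall canals."""
    return any(distance_nm(p, canal_location) <= threshold_nm
               for p in advisory.track_points)

# if lpfs_should_activate(adv, canals, great_circle_nm):
#     submit_adcirc_ensemble(deadline_hours=2)   # placeholder for the LONI deployment
```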


Fig. 1. Timely forecasts of the effects of hurricanes and tropical storms are imperative for emergency planning. The paths and intensities of the devastating 2005 hurricanes Katrina, Rita and Wilma [left], as with other storms, are forecast from five days before expected landfall using different numerical and statistical models [right]. Model validity depends on factors such as the storm properties, location and environment.

SURA Coastal Ocean Observing and Prediction (SCOOP). The SCOOP Program [1] (http://scoop.sura.org) involves a diverse collaboration of coastal modelers and computer scientists working with government agencies to create an open integrated network of distributed sensors, data and computer models. SCOOP is developing a broad community-oriented cyberinfrastructure to support coastal research activities, for which three key scenarios involving distributed coastal modeling drive infrastructure development: 24/7 operational activities, where various coastal hydrodynamic models (with very different algorithms, implementations, data formats, etc.) are run on a daily basis, driven by winds from different atmospheric models; retrospective modeling, where researchers can investigate different models, historical data sets, analysis tools, etc.; and, most relevant for DDDAS, hurricane forecasting. Here, severe storm events initiate automated model workflows triggered by National Hurricane Center advisories: high resolution wind fields are generated, which then initiate ensembles of hydrodynamic models (sketched below). The resulting data fields are distributed to the SCOOP partners for visualization and analysis, and are placed in a highly available archive [2].
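The chain just described (advisory → wind fields → hydrodynamic ensemble → archive and distribution) can be pictured with the following self-contained sketch. Every name in it — the Advisory class, the stub functions, the model and file names — is a hypothetical placeholder for illustration only, not an actual SCOOP or DynaCode interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Advisory:
    storm: str
    number: int

def generate_wind_fields(adv: Advisory) -> List[str]:
    # stub: would produce high-resolution wind files for this advisory
    return [f"{adv.storm}_adv{adv.number}_winds_{src}.nc" for src in ("forecast", "analytic")]

def run_hydro_model(model: str, wind_file: str) -> str:
    # stub: would launch one surge/wave run on grid resources and wait for it
    return f"{model}__{wind_file}__surge.out"

def archive_and_distribute(result: str) -> None:
    # stub: would push the result to the highly available archive [2] and notify partners
    print("archived:", result)

def on_advisory(adv: Advisory, models=("surge_model_A", "surge_model_B")) -> None:
    for wind_file in generate_wind_fields(adv):
        for model in models:
            archive_and_distribute(run_hydro_model(model, wind_file))

on_advisory(Advisory(storm="example_storm", number=12))
```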

2 Data Driven Hurricane Forecast Scenario

When advisories from the National Hurricane Center indicate that a storm may make landfall in a region impacting Louisiana, government officials, based on information provided by model predictions (Fig. 1) and balancing economic and social factors, must decide whether to evacuate New Orleans and surrounding towns and areas. Such advisories are provided every six hours, starting from some five days before the storm is predicted to make landfall. Evacuation notices for large cities like New Orleans need to be given 72 hours in advance.


Here we outline a complex DDDAS scenario which provides hurricane predictions using ensemble modeling: A suddenly strengthening tropical depression tracked by satellite changes direction, worrying officials. An alert is issued to state researchers and an advanced autonomic modeling system begins the complex process of predicting and validating the hurricane path. Real-time data from sensor networks on buoys, drilling platforms, and aircraft across the Gulf of Mexico, together with satellite imagery, provide varied-resolution data on ocean temperature, currents, wave height, wind direction and temperature. These data are fed continuously into an ensemble modeling tool which, using various optimization techniques from a standard toolkit and taking into account resource information, automatically and dynamically task-farms dozens of simulations, monitored in real-time.

Each simulation represents a complex workflow, with closely coupled models for atmospheric winds, ocean currents, surface waves and storm surges. The different models, and the algorithms within them, are dynamically chosen depending on physical conditions and required output sensitivity. Data assimilation methods are applied to observational data to provide boundary conditions and improved input data. Validation methods compare data between different ensemble runs and live monitoring data, with data tracking providing additional information for dynamic decisions. Studying ensemble data from remotely monitored simulations, researchers steer computations to ignore faulty or missing input data. Known sensitivity to uncertain sensor data is propagated through the coupled ensemble models, quantifying uncertainty.

Synthesized data from the ensemble models are compared with current satellite data to determine in real-time which models and components are most reliable, and a final high resolution model is run to predict, 72 hours in advance, the detailed location and severity of the storm surge. Louisiana's Office of Emergency Preparedness disseminates interactive maps of the projected storm surge and initiates contingency plans including impending evacuations and road closures.

3 System Requirements

Such scenarios require technical advances across simulation codes, algorithms, computer systems and measuring devices. Here we focus on technical issues related to the various components of a DDDAS simulation toolkit:

– Data Sources & Data Management. Data from varied sources must be integrated with models, e.g. wind fields from observational sources or computer models. Such data has different uncertainties, and improving the quality of input data, on demand, can lower forecast uncertainty. The ability to dynamically create customized ensembles of wind fields is needed, validated and improved with sensor data, for specified regions, complete with uncertainty functions propagated through models. Sensor data from observing systems must be available for real-time verification and data assimilation. Services for finding, transporting and translating data must scale to complex workflows of coupled interacting models. Emergency and real-time computing scenarios demand highly available data sources and data transport that is fault tolerant with guaranteed quality of service. Leveraging new optical networks requires mechanisms to dynamically reserve and provision networks, and data scheduling capabilities. Metadata describing the huge amounts of distributed data is also crucial, and must include provenance information.

– Model-Model Coupling and Ensembles. Cascades of coupled circulation, wave and transport models are needed. Beyond defining interfaces, methods are needed to track uncertainties, create and optimize distributed workflows as the storm approaches, and invoke models preferentially, based on algorithm performance and features indicated by input data. Cascades of models, with multiple components at each stage, lead to potentially hundreds of combinations where it is not known a priori which combinations give the best results. Automated and configurable ensemble modeling across grid resources, with continuous validation of results against observations and models, is critical for dynamically refining predictions. Algorithms are needed for dynamic configuration and creation of ensembles to provide predictions with a specifiable, required accuracy. In designing ensembles, the system must consider the availability and "cost" of resources, which may also depend on the threat and urgency; e.g. a Category 5 hurricane requires a higher quality of service than a Category 3 hurricane.

– Steering. Automated steering is needed to adjust models to physical properties and the system being modeled; e.g. one could steer sensor inputs for improved accuracy. The remote steering of model codes, e.g. to change output parameters to provide verification data, or to initiate the reading of new improved data, will require advances to the model software. Beyond the technical capabilities for steering parameters (which often requires the involvement of domain experts), the steering mechanism must require authentication, with changes logged to ensure reproducibility.

– Visualization and Notification. Detailed visualizations integrating multiple data and simulation sources to show the predicted effect of a storm are important for scientific understanding and public awareness (e.g. of the urgency of evacuation, or the benefits of building raised houses). Interactive and collaborative 3-D visualization for scientific insight will stress high speed networks, real-time algorithms and advanced clients. New visualizations of verification analysis and real-time sensor information are needed. Notification mechanisms to automatically inform scientists, administrators and emergency responders must be robust and configurable. Automated systems require human intervention and confirmation at different points, and the system should allow for mechanisms requiring authenticated response, with intelligent fallback mechanisms.

– Priority and Deadline Based Scheduling. Dealing with unpredictable events, and deploying multiple models concurrently with data streams, brings new scheduling and reservation requirements: priority, deadline-based, and co-scheduling. Computational resources must be available on demand, with guaranteed deadlines for results; multiple resources must be scheduled simultaneously and/or in sequence. Resources go beyond traditional computers, and include archival data, file systems, networks and visualization devices.


Policies need to be adopted at computing centers that enable event-driven computing and data streaming; computational resources of various kinds need to be available on demand, with policies reflecting the job priority.
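As a toy illustration of the priority- and deadline-based ordering motivated above, the sketch below orders jobs by descending threat category and, within a category, by earliest result deadline. It only illustrates the policy idea; it is not the SPRUCE token mechanism or any actual queue configuration used in the project.

```python
import heapq

# Toy ordering policy: higher threat category first, then earliest deadline.
def enqueue(queue, name, category, deadline_min):
    heapq.heappush(queue, ((-category, deadline_min), name))

queue = []
enqueue(queue, "ensemble_member_cat3", category=3, deadline_min=120)
enqueue(queue, "ensemble_member_cat5", category=5, deadline_min=120)
enqueue(queue, "wind_downscaling_cat5", category=5, deadline_min=45)

while queue:
    print(heapq.heappop(queue)[1])
# -> wind_downscaling_cat5, ensemble_member_cat5, ensemble_member_cat3
```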

4 DynaCode Components

In the DynaCode project this functionality is being developed by adapting and extending existing software packages, and building on the SCOOP and LPFS scenarios. Collectively, the packages described below form the basis for a "DDDAS Toolkit", designed for generic DDDAS applications, with specific drivers including the hurricane forecast scenario:

– Cactus Framework. Cactus [3], a portable, modular software environment for developing HPC applications, has already been used to prototype DDDAS-style applications. Cactus has numerous existing capabilities relevant for DDDAS, including extreme portability, flexible and configurable I/O, an inbuilt parameter steering API and robust checkpoint/restart capabilities. Cactus V5, currently under development, will include additional crucial features, including the ability to expose individual 'thorn' (component) methods with web service interfaces, and to interoperate with other framework architectures. Support for the creation and archiving of provenance data, including general computational and domain specific information, is being added, along with automated archiving of simulation data.

– User Interfaces. An integrated, secure web user interface developed with the GridSphere portal framework builds on existing relevant portlets [4]. New portlets include Threat Level and Notification, Ensemble Monitoring, Ensemble Track Visualization and Resource Status. A Mac OS X "widget" (Fig. 2, left) displays hurricane track information and system status.

– Data Management. Through the SCOOP project a highly reliable coastal data archive [2] has been implemented at LSU, with 7 TB of local storage and 7 TB of remote storage (SDSC SRB) for historical data. This archive was designed to ingest and provide model (surge, wave, wind) and observational (sensor, satellite) data, and is integrated with a SCOOP catalogue service at UAH. To support dynamic scenarios, a general trigger mechanism was added to the archive which can be configured to perform arbitrary tasks on the arrival of certain files. This mechanism is used to drive ensemble configuration and deployment, notification and other components. DynaCode is partnered with the NSF funded PetaShare project, which is developing new technologies for distributed data sharing and management, in particular data-aware storage systems and data-aware schedulers.

– Ensembles. Ensembles for DynaCode scenarios are currently deployed across distributed resources with a management component executed on the archive machine. The system is designed to support rapid experimentation rather than complex workflows, and provides a completely data-driven architecture with the ensemble runs being triggered by the arrival of input wind files.


Fig. 2. [Left] The LPFS system is activated if any ensemble member places the hurricane track within 271 nautical miles of the canal mouths (inside the inner circle). [Right] A threat level system allows the threat level to be changed by trusted applications or scientists. This triggers the notification of system administrators, customers and scientists and the setting of policies on compute resources for appropriately prioritized jobs. The diagram shows a portal interface to the threat level system.

As ensemble runs complete, results are fed to a visualization service and also archived in the ROAR Archive [2]. Metadata relating to the run and the result set is fed into the catalog developed at UAH via a service interface.

– Monitoring. The workflow requires highly reliable monitoring to detect failures and prompt corrective action. Monitoring information (e.g. data transfers and job status) is registered by the various workflow components and can be viewed via portal interfaces. A spool mechanism is used to deliver monitoring information via log files to a remote service, providing high reliability and flexibility. This ensures the DynaCode workflow will not fail due to unavailability of the monitoring system; the workflow also executes faster than a system where monitoring information is transported synchronously.

– Notification. A general notification mechanism sends messages via different mechanisms to configurable role-based groups. The system currently supports email, instant messaging, and SMS text messages, and is configurable via a GridSphere portlet. The portlet behaves as a messaging server that receives updates from, e.g., the workflow system and relays messages to subscribers. Subscribers can belong to different groups that determine the information content of the messages they receive, allowing messages to be customized for, e.g., system administrators, scientists or emergency responders.

– Priority Scheduling & Threat Levels. Accurate forecasts of hurricane events, involving large ensembles, need to be completed quickly and reliably with specific deadlines. To provide on-demand resources the DynaCode workflow makes use of policies as well as software. On the large scale resources of LONI and CCT, the queues have been configured so that it is possible to preempt currently running jobs and free compute resources at extremely short notice. Large queues are reserved for codes that can checkpoint and restart.


These queues share compute nodes with preemptive queues that preempt jobs in the 'checkpoint' queues when they receive jobs to run. Software such as SPRUCE (http://spruce.teragrid.org/) is being used to provide elevated priority and preemptive capabilities to jobs that hold special tokens, reducing the user-management burden on system administrators. A "Threat Level" service has been developed; trusted applications or users can set a global threat level to red, amber, yellow or green (using web service or portal interfaces), depending on the perceived threat and urgency (Fig. 2). Changes to the threat level trigger notifications to different role groups, and the service is being integrated with the priority scheduling system and policies.

– Co-allocation. DynaCode is partnering with the NSF Enlightened Computing project, which is developing application-enabling middleware for optical networks. The HARC co-allocator, developed through the Enlightened-DynaCode collaboration, can already allocate reservations on compute resources and optical networks, and is being brought into production use on the LONI network to support DynaCode and other projects.

Acknowledgments. The author acknowledges contributions from colleagues in the SCOOP, CLEAR and LPFS projects and collaborators at LSU. This work is part of the NSF DynaCode project (0540374), with additional funding from the SURA Coastal Ocean Observing and Prediction (SCOOP) Program (including ONR Award N00014-04-1-0721 and NOAA Award NA04NOS4730254). Computational resources and expertise from LONI and CCT are gratefully acknowledged.

References

1. Bogden, P., Allen, G., Stone, G., Bintz, J., Graber, H., Graves, S., Luettich, R., Reed, D., Sheng, P., Wang, H., Zhao, W.: The Southeastern University Research Association Coastal Ocean Observing and Prediction Program: Integrating Marine Science and Information Technology. In: Proceedings of the OCEANS 2005 MTS/IEEE Conference, Sept 18-23, 2005, Washington, D.C. (2005)
2. MacLaren, J., Allen, G., Dekate, C., Huang, D., Hutanu, A., Zhang, C.: Shelter from the Storm: Building a Safe Archive in a Hostile World. In: Proceedings of the Second International Workshop on Grid Computing and its Application to Data Analysis (GADA'05), Agia Napa, Cyprus, Springer Verlag (2005)
3. Goodale, T., Allen, G., Lanfermann, G., Massó, J., Radke, T., Seidel, E., Shalf, J.: The Cactus framework and toolkit: Design and applications. In: High Performance Computing for Computational Science - VECPAR 2002, 5th International Conference, Porto, Portugal, June 26-28, 2002, Berlin, Springer (2003) 197–227
4. Zhang, C., Dekate, C., Allen, G., Kelley, I., MacLaren, J.: An Application Portal for Collaborative Coastal Modeling. Concurrency Computat.: Pract. Exper. 18 (2006)

A Dynamic Data Driven Wildland Fire Model

Jan Mandel1,2, Jonathan D. Beezley1,2, Lynn S. Bennethum1, Soham Chakraborty3, Janice L. Coen2, Craig C. Douglas3,5, Jay Hatcher3, Minjeong Kim1, and Anthony Vodacek4

1 University of Colorado at Denver and Health Sciences Center, Denver, CO 80217-3364, USA
2 National Center for Atmospheric Research, Boulder, CO 80307-3000, USA
3 University of Kentucky, Lexington, KY 40506-0045, USA
4 Rochester Institute of Technology, Rochester, NY 14623-5603, USA
5 Yale University, New Haven, CT 06520-8285, USA

Abstract. We present an overview of an ongoing project to build a DDDAS that uses all available data for short-term wildfire prediction. The project involves new data assimilation methods to inject data into a running simulation, a physics-based model coupled with weather prediction, on-site data acquisition using sensors that can survive a passing fire, and on-line visualization using Google Earth.

1 Introduction

DDDAS for short-term wildland fire prediction is a challenging problem. Techniques standard in geophysical applications generally do not work because the nonlinearities of fire models are much stronger than those of the atmosphere or the ocean, the data are incomplete, and it is not clear which model best combines physical fidelity with faster-than-real-time speed.

2 Fire Model

The goal in wildland fire modeling is to predict the behavior of a complex system involving many processes and uncertain data by a physical model that reproduces important dynamic behaviors. Our overall approach is to create a mathematical model at the scales at which the dominant behaviors of the system occur and the data exist. Perhaps the simplest PDE-based wildland fire model [2] is of the form

$$\frac{dT}{dt} = \nabla \cdot (k\,\nabla T) + \vec{v} \cdot \nabla T + A\,\big(S\,r(T) - C_0\,(T - T_a)\big), \qquad (1)$$

$$\frac{dS}{dt} = -C_S\, S\, r(T), \qquad (2)$$

where the reaction rate is a modified Arrhenius rate,

$$r(T) = e^{-B/(T - T_a)}. \qquad (3)$$


[Plot of temperature (C) versus time (seconds); see the caption of Fig. 1 below.]

Fig. 1. Measured time-temperature profile (dotted line) in a wildland fire at a fixed sensor location, digitized from [1], and computed profile (solid line) [2]. Coefficients of the model were identified by optimization to match the measured profile.

Eq. (1) represents a 2D balance of energy in a fire layer of some unspecified finite vertical thickness, and (2) represents the balance of fuel. Here T is the temperature of the fire layer; r(T) ∈ [0, 1] is the reaction rate, assumed to depend only on the temperature; S ∈ [0, 1] is the fuel supply mass fraction (the relative amount of fuel remaining); k is the diffusion coefficient; A is the temperature rise per second at the maximum burning rate with full initial fuel load and no cooling present; B is the proportionality coefficient in the modified Arrhenius law; C0 is the scaled coefficient of the heat transfer to the environment; CS is the fuel relative disappearance rate; Ta is the ambient temperature; and $\vec{v}$ is the given wind speed from the atmosphere. Physics-based models of a form similar to (1)–(3) are known [3,4,5]. The system (1)–(2) with the reaction rate (3) is formally the same as the combustion equations of premixed fuel, with cooling added. Our interpretation is that the equations are a rough approximation of the aggregated behavior of small-scale random combustion. We are going to add variables and equations only as needed to match the observed fire behavior. Possible future extensions include pyrolysis, multiple fuels, crown fire, and moisture (treated as a fuel with negative heat content). The reaction rate (3) has been modified to be exactly zero at ambient temperature (according to chemical kinetics, the reaction rate should be zero only at absolute zero temperature); consequently, equations (1)–(3) admit traveling wave solutions. The temperature in the traveling wave has a sharp leading edge, followed by an exponentially decaying cool-down trailing edge (Fig. 1). The wave speed can be found numerically [5], but no mathematical proof of the existence of traveling waves and their speed for system (1)–(3) seems to be known. Since weather is a very significant influence on fire, and fire in turn has an influence on weather, coupling of the fire model with the Weather Research Forecast Model


(WRF) [6,7] is in progress [8]. It is now well established that fire dynamics can be understood only by modeling the atmosphere and the fire together, e.g., [9].

3 Coefficient Identification

The reason for writing the coefficients of model (1)–(3) in that particular form is that it is basically hopeless to use physical coefficients and expect a reasonable solution. The fuel is heterogeneous, the fire blends into the atmosphere, and it is unclear what exactly are, e.g., the diffusion and the heat loss coefficients. Instead, we consider information that can be reasonably obtained. For example, consider the heat balance term at full fuel load (S = 1), f(T) = r(T) − C0(T − Ta). Generally, there are three temperatures which cause f(T) to be zero (where the heat created by the reaction is balanced by the cooling): Ta, arising from our modification of the Arrhenius rate (otherwise there is one zero just above Ta because some reaction is always present); the autoignition temperature; and finally the "high temperature regime" [10], which is the hypothetical steady burning temperature assuming fuel is constantly replenished. Substituting reasonable values for the latter two temperatures allows us to determine reasonable approximate values of the coefficients B and C0. Assuming zero wind, we can then determine the remaining coefficients in 1D dynamically from a measured temperature profile in an actual wildland fire [1]. Reasonable values of the remaining coefficients are obtained by (i) passing to a nondimensional form and matching nondimensional characteristics of the temperature profile, such as the ratio of the widths of the leading and the trailing edge, and (ii) using the traveling wave speed and intensity to match the scale. Once reasonable starting values are known, the coefficients can be further refined by optimization to match the measured temperature profile (Fig. 1). See [2] for details. Identification of coefficients in the presence of wind in the model is in progress [11].
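The closed form below is a hedged sketch of this step: it takes the two nontrivial zeros of f(T) = r(T) − C0(T − Ta) to be the autoignition temperature and the high-temperature-regime temperature and solves the two resulting equations for B and C0. The temperature values are illustrative placeholders, not those identified in [2].

```python
import numpy as np

Ta, T_i, T_h = 300.0, 600.0, 1200.0     # ambient, autoignition, high-temperature regime (placeholders, in K)
xi, xh = T_i - Ta, T_h - Ta

# f(T) = exp(-B/(T - Ta)) - C0*(T - Ta) = 0 at both T_i and T_h gives two equations
# in the two unknowns B and C0, with the closed-form solution:
B = np.log(xi / xh) / (1.0 / xh - 1.0 / xi)
C0 = np.exp(-B / xh) / xh

f = lambda T: np.exp(-B / (T - Ta)) - C0 * (T - Ta)
assert abs(f(T_i)) < 1e-9 and abs(f(T_h)) < 1e-9    # both balance points are indeed zeros
print(B, C0)
```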

4 Numerical Solution

The coefficients identified from a 1D temperature profile were used in a 2D model, discretized by standard central finite differences with upwinding of the convection term. Implicit time stepping by the trapezoidal method (Crank–Nicolson) was used for stability in the reaction term and to avoid excessively small time steps. The large sparse system of nonlinear equations arising in every time step was solved by Newton's method, with GMRES as the linear solver, preconditioned by elimination of the fuel variables (which are decoupled in space) and FFT. One advantage of this approach is that even after more fuels are added, the fuel variables can still be eliminated at every node independently, and the resulting system has the same sparsity structure as the heat equation alone. Typically, mesh steps of 1 m – 5 m and time steps of 1 s – 5 s are required to resolve the moving fire front and to obtain a reasonable simulation [11].
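A deliberately simplified sketch of Eqs. (1)–(2) in 1D is given below to make the coupled temperature–fuel update concrete. It uses an explicit forward-Euler step, central differences for diffusion, periodic ends, and zero wind (as in the 1D coefficient identification above); the solver described in this section is instead implicit (Crank–Nicolson) with Newton/GMRES and upwinded convection, and all coefficient values here are placeholders.

```python
import numpy as np

def step(T, S, dx, dt, k, A, B, C0, CS, Ta):
    """One explicit (forward-Euler) update of the 1D, zero-wind form of (1)-(2)."""
    r = np.where(T > Ta, np.exp(-B / np.maximum(T - Ta, 1e-12)), 0.0)   # rate (3), zero at/below Ta
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2            # central-difference diffusion, periodic ends
    T_new = T + dt * (k * lap + A * (S * r - C0 * (T - Ta)))
    S_new = S + dt * (-CS * S * r)
    return T_new, S_new

# illustrative run with placeholder coefficients: a hot spot ignites and depletes the fuel
x = np.arange(0.0, 500.0, 2.0)
T = 300.0 + 700.0 * np.exp(-((x - 100.0) / 10.0) ** 2)
S = np.ones_like(x)
for _ in range(1000):
    T, S = step(T, S, dx=2.0, dt=0.05, k=2.0, A=200.0, B=500.0, C0=1e-3, CS=0.2, Ta=300.0)
```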

5 Data Assimilation

Data assimilation is how the data are actually injected into a running model. We have chosen a probabilistic approach to data assimilation, which is common in geosciences [12]. The ensemble filter method used here is one possible approach. This method estimates the system state from all available data by approximating the probability distribution of the state by a sample, called an ensemble. The ensemble members are advanced in time until a given analysis time. At the analysis time, the probability density, now called the prior or the forecast, is modified by accounting for the data, which are considered to be observed at that time. This is done by multiplying the prior density by the data likelihood and normalizing (the Bayesian update). The new probability density is called the posterior or the analysis density.

The data likelihood is a function of state: it is the probability density that the data could have been obtained for the given state. It can be found from the probability density of the data error, which is assumed to be known (every measurement must be accompanied by an error estimate or it is meaningless), and an observation function (sometimes called the forward operator in an inverse problem context). The argument of the observation function is a model state, and its value is what the correct value of the data should be for that state.

The Ensemble Kalman Filter (EnKF) [13,14] works by forming the analysis ensemble as linear combinations of the forecast ensemble, and the Bayesian update is implemented by linear algebra operating on the matrix created from ensemble members as columns (see the sketch at the end of this section). In a wildland fire application, EnKF produces nonphysical states that result in instability and model breakdown [2,15]. The linear combination of several states with a fire in slightly different locations cannot match data with a fire in another location. In a futile attempt to match the data, EnKF builds the analysis ensemble out of wildly nonphysical linear combinations of the forecast ensemble. These can be fatal immediately; even if they are not, the analysis ensemble tends to bear little resemblance to the data. This can be ameliorated to some degree by penalization of nonphysical states [2,15], but not well enough to produce a reliable method: the filter is capable of only a small adjustment of the fire location, and its parameters have to be finely tuned for acceptable results; in particular, the data variance is required to be artificially large.

For this reason, we have developed a new method that updates the location of firelines directly, called the morphing ensemble filter [16,17], which is based on techniques borrowed from registration in image processing. Given two images as pixel values on the same domain, the registration problem is to find a mapping of the domain that turns one of the given images into the other with a residual (the remaining difference) that is as small as possible. We write this mapping as the identity plus a registration mapping. The registration mapping should be as small and as smooth as possible. Hence, one natural way to solve the automatic registration problem is by optimization, and we use a method based on [18]. Once the registration mapping and the residual are found, we can construct intermediate images between the two images by applying the registration mapping


Fig. 2. Data assimilation by the morphing ensemble filter [16]. Contours are at 800 K, indicating the location of the fireline. The reaction zone is approximately inside the moon-shaped curve; this fireline shape is due to the wind. The forecast ensemble (a) was obtained by a perturbation of an initial solution. The fire in the simulated data (b) is intentionally far away from the fire in the initial solution. The data for the EnKF engine itself consisted of the transformed state obtained by registration of the image. The analysis (c) shows the adjusted location of the fire in the analysis ensemble members.

and adding the residual, both multiplied by some number between zero and one. Generalizing this observation, we can choose one fixed state with a fire in some location, and register all ensemble members. The registration mappings and the residuals then constitute a transformed ensemble. The morphing ensemble filter consists of applying the EnKF to the transformed ensemble and thus using intermediate states rather than linear combinations. This results in an adjustment of both the intensity and the position of the fire. Now the fireline in the data can be anywhere in the domain, and the data variance can be small without causing filter divergence. Fire data in the form of an image can be assimilated by registering the image and then using the transformed image as the observation. The observation function is then only a pointwise transformation of the model state to gridded values and interpolation. In the general case, the transformed observation function is defined by composition of functions, and so it is a highly nonlinear function of the registration mapping. So, assimilation of weather station data and sensor data will be more complicated and can result in strongly non-Gaussian multimodal probability distributions. For example, a sensor can read low temperature either because the fire did not get there yet or because it has already passed, so the analysis density after assimilating one sensor reading would be concentrated around these two possibilities. Another view is that the morphing EnKF works by transforming the state so that after the transformation, the probability distribution is closer to Gaussian. We have also developed another new technique, called predictor-corrector filters [17], to deal with the remaining non-Gaussianity.
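The following is a minimal numpy sketch of the standard perturbed-observation EnKF analysis step referred to above — the linear-algebra update that forms the analysis ensemble as linear combinations of forecast members. In the morphing filter this update is applied to the transformed ensemble (registration mappings and residuals) rather than to raw gridded states; the linear observation operator H, the array shapes and the usage values here are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(X, d, H, R, rng):
    """Perturbed-observation EnKF analysis step.

    X : (n, N) forecast ensemble, one model state per column
    d : (m,)   observation vector
    H : (m, n) linear observation operator (illustrative assumption)
    R : (m, m) observation error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)                # forecast anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    PHt = A @ HA.T / (N - 1)                             # sample P H^T
    S = HA @ HA.T / (N - 1) + R                          # H P H^T + R
    D = d[:, None] + rng.multivariate_normal(np.zeros_like(d), R, size=N).T  # perturbed observations
    K = PHt @ np.linalg.inv(S)                           # Kalman gain
    return X + K @ (D - HX)                              # analysis ensemble: linear combinations of forecast members

# tiny usage with made-up sizes
rng = np.random.default_rng(0)
n, N, m = 100, 20, 5
X = rng.normal(size=(n, N))
H = np.zeros((m, n)); H[np.arange(m), np.arange(0, n, n // m)] = 1.0   # observe 5 state components
Xa = enkf_analysis(X, d=rng.normal(size=m), H=H, R=0.1 * np.eye(m), rng=rng)
```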

6 Data Acquisition

Currently, we are using simple simulated data, as described in the previous section. In the future, data will come from fixed sensors that measure temperature, radiation, and local weather conditions. The fixed Autonomous Environmental Sensors (AESs), positioned so as to provide weather conditions near a fire, are mounted at various heights above the ground on a pole with a ground spike (Fig. 3a). This type of system will survive burnovers by low intensity fires. The temperature and radiation measurements provide a direct indication of the fire front passage, and the radiation measurement can also be used to determine the intensity of the fire [20]. The sensors transmit data and can be reprogrammed by radio. Data will also come from images taken by sensors on either satellites or airplanes. Three-wavelength infrared images can then be processed using a variety of algorithms to extract which pixels contain a signal from fire and to determine the energy radiated by the fire and even the normal velocity of the fireline (Fig. 3b). The data are related to the model by an observation function. Currently, we are using simple simulated observation functions, as described in the previous section. Our software framework supports using multiple types of observation functions (image, weather station, etc.). Each data item must carry information about which observation function is to be used and any requisite parameters (coordinates, scaling factors, etc.) in metadata.
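A minimal sketch of the metadata-driven dispatch just described: each data item names the observation function to apply and carries its parameters. The registry keys, metadata fields and the two toy observation functions are illustrative placeholders, not the framework's actual interfaces.

```python
# Toy registry mapping metadata "kind" to an observation function (placeholders only).
def image_observation(state, meta):
    # map the model temperature field to gridded image values (placeholder transform)
    return [[pixel * meta["scaling"] for pixel in row] for row in state["T"]]

def station_observation(state, meta):
    i, j = meta["grid_index"]
    return state["T"][i][j]                  # temperature sampled at the station's grid cell

OBSERVATION_FUNCTIONS = {"image": image_observation, "station": station_observation}

def apply_observation(data_item, state):
    meta = data_item["metadata"]
    return OBSERVATION_FUNCTIONS[meta["kind"]](state, meta)

state = {"T": [[300.0, 320.0], [400.0, 800.0]]}
print(apply_observation({"metadata": {"kind": "station", "grid_index": (1, 1)}}, state))  # 800.0
```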


Fig. 3. (a) AES package deployed on the Tripod Fire in Washington (2006). The unit shown has been burned over by the fire but is still functional. The datalogger package is buried a small distance beneath the surface to shield it from the effects of the fire. (b) Airborne image processed to extract the location and propagation vector of the fireline (reproduced from [19] by permission).



Fig. 4. The Google Earth Fire Layering software tool. (a) Close-up of a single fire. (b) 3D view.

7 Visualization

Our primary visualization approach at present is to store gridded data from the model arrays in files and visualize them in Matlab. Our Google Earth Fire visualization system (Fig. 4) greatly simplifies map and image visualization and will be used for model output in the future. The user can control the viewing perspective, zoom into specific sites, and select the time frame of the visualization within the parameters of the currently available simulation. Google Earth is quickly becoming a de facto standard, and wildland fire visualizations in Google Earth are now available from several commercial and government sources, e.g., [21,22].
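In the spirit of the Google Earth layering described above, the sketch below writes a fire perimeter as a simple KML placemark that Google Earth can load. The coordinates are made up, and the actual Google Earth Fire tool is not reproduced here.

```python
# Hedged sketch: export an illustrative fireline contour as a minimal KML file.
perimeter = [(-105.10, 40.00), (-105.09, 40.01), (-105.08, 40.00), (-105.10, 40.00)]  # (lon, lat), made up

coords = " ".join(f"{lon},{lat},0" for lon, lat in perimeter)
kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Simulated fireline (800 K contour)</name>
    <LineString><coordinates>{coords}</coordinates></LineString>
  </Placemark>
</kml>"""

with open("fireline.kml", "w") as f:
    f.write(kml)
```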

Acknowledgment. This research has been supported by the National Science Foundation under grants CNS 0325314, 0324989, 0324988, 0324876, and 0324910.

References

1. Kremens, R., Faulring, J., Hardy, C.C.: Measurement of the time-temperature and emissivity history of the burn scar for remote sensing applications. Paper J1G.5, Proceedings of the 2nd Fire Ecology Congress, Orlando FL, American Meteorological Society (2003)
2. Mandel, J., Bennethum, L.S., Beezley, J.D., Coen, J.L., Douglas, C.C., Franca, L.P., Kim, M., Vodacek, A.: A wildfire model with data assimilation. CCM Report 233, http://www.math.cudenver.edu/ccm/reports/rep233.pdf (2006)
3. Asensio, M.I., Ferragut, L.: On a wildland fire model with radiation. International Journal for Numerical Methods in Engineering 54 (2002) 137–157
4. Grishin, A.M.: General mathematical model for forest fires and its applications. Combustion Explosion and Shock Waves 32 (1996) 503–519


5. Weber, R.O., Mercer, G.N., Sidhu, H.S., Gray, B.F.: Combustion waves for gases (Le = 1) and solids (Le → ∞). Proceedings of the Royal Society of London Series A 453 (1997) 1105–1118
6. Michalakes, J., Dudhia, J., Gill, D., Klemp, J., Skamarock, W.: Design of a next-generation weather research and forecast model. Towards Teracomputing: Proceedings of the Eighth Workshop on the Use of Parallel Processors in Meteorology, European Center for Medium Range Weather Forecasting, Reading, U.K., November 16-20, 1998. ANL/MCS preprint number ANL/MCS-P735-1198 (1998)
7. WRF Working Group: Weather Research Forecasting (WRF) Model. http://www.wrf-model.org (2005)
8. Beezley, J.D.: Data assimilation in coupled weather-fire models. Ph.D. Thesis, in preparation (2008)
9. Coen, J.L.: Simulation of the Big Elk Fire using coupled atmosphere-fire modeling. International Journal of Wildland Fire 14 (2005) 49–59
10. Frank-Kamenetskii, D.A.: Diffusion and heat exchange in chemical kinetics. Princeton University Press (1955)
11. Kim, M.: Numerical modeling of wildland fires. Ph.D. Thesis, in preparation (2007)
12. Kalnay, E.: Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press (2003)
13. Evensen, G.: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research 99 (C5) (1994) 143–162
14. Houtekamer, P., Mitchell, H.L.: Data assimilation using an ensemble Kalman filter technique. Monthly Weather Review 126 (1998) 796–811
15. Johns, C.J., Mandel, J.: A two-stage ensemble Kalman filter for smooth data assimilation. Environmental and Ecological Statistics, in print. CCM Report 221, http://www.math.cudenver.edu/ccm/reports/rep221.pdf (2005)
16. Beezley, J.D., Mandel, J.: Morphing ensemble Kalman filters. CCM Report 240, http://www.math.cudenver.edu/ccm/reports/rep240.pdf (2007)
17. Mandel, J., Beezley, J.D.: Predictor-corrector and morphing ensemble filters for the assimilation of sparse data into high dimensional nonlinear systems. 11th Symposium on Integrated Observing and Assimilation Systems for the Atmosphere, Oceans, and Land Surface (IOAS-AOLS), CD-ROM, Paper 4.12, 87th American Meteorological Society Annual Meeting, San Antonio, TX, January 2007
18. Gao, P., Sederberg, T.W.: A work minimization approach to image morphing. The Visual Computer 14 (1998) 390–400
19. Ononye, A.E., Vodacek, A., Saber, E.: Automated extraction of fire line parameters from multispectral infrared images. Remote Sensing of Environment (to appear)
20. Wooster, M.J., Zhukov, B., Oertel, D.: Fire radiative energy for quantitative study of biomass burning: derivation from the BIRD experimental satellite and comparison to MODIS fire products. Remote Sensing of Environment 86 (2003) 83–107
21. NorthTree Fire International: Mobile Mapping/Rapid Assessment Services. http://www.northtreefire.com/gis/ (2007)
22. USGS: RMGSC - Web Mapping Applications for the Natural Sciences. http://rockyitr.cr.usgs.gov/rmgsc/apps/Main/geiDownloads.html (2007)

Ad Hoc Distributed Simulation of Surface Transportation Systems

R.M. Fujimoto, R. Guensler, M. Hunter, K. Schwan, H.-K. Kim, B. Seshasayee, J. Sirichoke, and W. Suh

Georgia Institute of Technology, Atlanta, GA 30332, USA
{fujimoto@cc, randall.guensler@ce, michael.hunter@ce, schwan@cc}.gatech.edu

Abstract. Current research in applying the Dynamic Data Driven Application Systems (DDDAS) concept to monitor and manage surface transportation systems in day-to-day and emergency scenarios is described. This work is focused on four tightly coupled areas. First, a novel approach to predicting future system states termed ad hoc distributed simulations has been developed and is under investigation. Second, on-line simulation models that can incorporate real-time data and perform rollback operations for optimistic ad hoc distributed simulations are being developed and configured with data corresponding to the Atlanta metropolitan area. Third, research in the analysis of real-time data is being used to define approaches for transportation system data collection that can drive distributed on-line simulations. Finally, research in data dissemination approaches is examining effective means to distribute information in mobile distributed systems to support the ad hoc distributed simulation concept.

Keywords: surface transportation systems, ad hoc distributed simulations, rollback operations.

1 Introduction

The Vehicle-Infrastructure Integration (VII) initiative by government agencies and private companies is deploying a variety of roadside and mobile sensing platforms capable of collecting and transmitting transportation data [1-3]. With the ongoing deployment of vehicle and roadside sensor networks, transportation planners and engineers have the opportunity to explore new approaches to managing surface transportation systems, offering the potential to create more robust, efficient transportation infrastructures than was previously possible. Effective and efficient system management will require real-time determinations as to which data should be monitored, and at what resolutions. Distributed simulations offer the ability to predict future system states for use in optimizing system behaviors, both in day-to-day traffic conditions and in times of emergency, e.g., under evacuation scenarios. Data collection, data processing, data analysis, and simulations performed by system agents (sub-network monitoring systems, base stations, vehicles, etc.) will lessen communication bandwidth requirements and harness surplus computing


capacity. Middleware to manage the distributed network, synchronize data and results among autonomous agents, and resolve simulation output conflicts between agents using disparate data sets becomes a critical part of such a system. Dynamic, data-driven application systems (DDDAS) offer the potential to yield improved system efficiencies that can reduce traffic delays, congestion and pollution, and ultimately save lives during times of crisis.

We are addressing this challenge through a distributed computing and simulation approach that exploits in-vehicle computing and communication capabilities, coupled with infrastructure-based deployments of sensors and computing equipment. Specifically, we envision a system architecture that includes in-vehicle computing systems, roadside compute servers (e.g., embedded in traffic signal controllers), as well as servers residing in traffic management centers (TMCs). The remaining sections provide an overview of specific research examining key elements of this system. The next section describes a concept called ad hoc distributed simulations that is used to project future system states.

2 Ad Hoc Distributed Simulations

Consider a collection of in-vehicle simulations that are interconnected via wireless links and (possibly) wired network infrastructure. Individually, each simulation models only a portion of the traffic network – that which is of immediate interest to the "owner" of the simulator (Figure 1).


Fig. 1. In-Vehicle Simulation

Collectively, these simulations could be used to create a kind of distributed simulation system with the ability to make forecasts concerning the transportation infrastructure as a whole. One can envision combining in-vehicle simulators with simulations running within the roadside infrastructure, e.g., within traffic signal controller cabinets, simulations combining sub-regions of the transportation network, and simulations running in traffic management centers to create a large-scale model of a city's transportation system, as shown in Figure 2.

[Figure 2 diagram: in-vehicle simulations feed roadside servers, which feed area servers and, in turn, a regional server.]

Fig. 2. Ad Hoc Distributed Simulation Structure

We term a collection of autonomous, interacting simulations tied together in this fashion an ad hoc distributed simulation. Like a conventional distributed simulation, each simulator within an ad hoc distributed simulation models a portion of the overall system, and simulators exchange time stamped state information to collectively create a model of the overall system. However, in a conventional distributed simulation the system being modeled is designed in a top-down fashion: the system is neatly partitioned into non-overlapping elements, e.g., geographic regions, and a process is assigned to model each element. By contrast, an ad hoc distributed simulation is created in a bottom-up fashion with no overarching intelligence governing the partitioning and mapping of the system to processes. Rather, the distributed simulation is constructed in an "ad hoc" fashion, in much the same way an arbitrary collection of mobile radios join together to form an ad hoc wireless network. The elements of the physical system modeled by different simulators in an ad hoc distributed simulation may overlap, leading to possibly many duplicate models of portions of the system, as seen in Figure 1. Other parts of the system may not be modeled at all. For example, an in-vehicle transportation simulator may model only the portion of the road network along the vehicle's intended path to reach its destination. Thus, ad hoc distributed simulations differ in important fundamental ways from conventional distributed simulations.

Ad hoc distributed simulations are on-line simulation programs, meaning they are able to capture the current state of the system through measurement, and then execute forward as rapidly as possible to project a future state of the system. By assumption, each of the simulators making up the distributed simulation can simulate some portion of the system faster than real time. Ad hoc distributed simulations require a synchronization protocol to coordinate interactions among the simulations. For this purpose we have developed an optimistic (rollback-based) synchronization protocol designed for use in these systems. Each in-vehicle simulator utilizes information concerning traffic conditions and predictions of future system states (e.g., road flow rates) to complete its


simulation. If this information changes beyond certain parameterized limits, the simulator rolls back and corrects its previously computed results. Based on this protocol, a prototype ad hoc distributed simulation system has been developed using both a custom-developed cellular automata traffic simulator and a commercial simulation tool called VISSIM, described next. Further details of this work are presented in [4].
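A minimal sketch of the checkpoint-and-rollback bookkeeping behind this optimistic protocol is given below: the simulator saves its state as it runs ahead, and when an updated prediction for some past time differs from the value it had assumed by more than a parameterized tolerance, it restores the corresponding checkpoint and re-simulates. The state contents, the tolerance test on a single flow value, and all names are illustrative simplifications, not the prototype's actual implementation.

```python
import copy

class OptimisticClient:
    """Checkpoint/rollback bookkeeping for one simulator; the simulation step itself is a stub."""

    def __init__(self, state, tolerance):
        self.state = state          # e.g. vehicle positions and link flows (placeholder dict)
        self.assumed = {}           # time -> boundary flow assumed when that step was computed
        self.checkpoints = []       # (time, copy of the state saved before that step)
        self.tolerance = tolerance  # "parameterized limit" on boundary-condition changes

    def advance(self, t, boundary_flow):
        self.checkpoints.append((t, copy.deepcopy(self.state)))
        self.assumed[t] = boundary_flow
        self.state["time"] = t      # stand-in for one real simulation step using boundary_flow

    def receive_update(self, t, new_flow):
        """Another simulator sends an updated prediction for time t; roll back if it matters."""
        if abs(new_flow - self.assumed.get(t, new_flow)) <= self.tolerance:
            return False            # change is within limits: keep the optimistic results
        while len(self.checkpoints) > 1 and self.checkpoints[-1][0] > t:
            self.checkpoints.pop()  # discard checkpoints taken after time t
        t0, saved = self.checkpoints[-1]
        self.state = copy.deepcopy(saved)
        self.assumed = {u: f for u, f in self.assumed.items() if u < t0}
        self.assumed[t] = new_flow
        return True                 # caller now re-simulates forward from t0 with corrected inputs

client = OptimisticClient(state={"time": 0}, tolerance=5.0)
for t, flow in [(1, 100.0), (2, 102.0), (3, 101.0)]:
    client.advance(t, flow)
print(client.receive_update(2, 130.0), client.state)   # True {'time': 1}: rolled back to before step 2
```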

3 Transportation Simulation Models

An ad hoc transportation simulation based on a cellular automata model was developed for our initial investigations. The simulation consists of agents modeling vehicles, traffic signal controllers and traffic lights. The simulation operates in a time-stepped fashion. At every timestep, each agent decides what operation to perform next based on its own characteristics and the state of the system at the end of the previous interval. Each vehicle agent includes characteristics such as origin, destination, maximum speed of the vehicle, and driver characteristics such as aggressiveness. Each vehicle has complete knowledge of the road topography around it. At the end of each timestep, each vehicle agent decides whether to move, stop, accelerate, or decelerate based on the following four rules (a minimal code sketch of one update step follows the list):

• Acceleration: if the velocity v of a vehicle is lower than the maximum velocity and if the distance to the next car ahead is larger than v + gap, the speed is increased: v = v + acceleration.
• Slowing down (due to other cars): if a vehicle at site X sees the next vehicle at site X + j (with j < v), it reduces its speed to j: v = j.
• Randomization: the velocity of a vehicle, if greater than zero, is decreased with probability pdec: v = v − deceleration.
• Car motion: each vehicle is advanced v cells.
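The four rules above closely resemble a Nagel–Schreckenberg-style cellular automaton. Below is a minimal, self-contained sketch of one synchronous timestep on a single-lane ring road; traffic signals, origins/destinations and driver aggressiveness from the full agent model are omitted, the parameter values are illustrative, and rule 2 is applied to the open gap of j − 1 cells so that the sketch stays collision-free.

```python
import numpy as np

def step(cells, v_max=5, accel=1, decel=1, gap=1, p_dec=0.2, rng=None):
    """One synchronous update of a single-lane ring road.
    cells[i] == -1 means cell i is empty; otherwise it holds that vehicle's current speed."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(cells)
    occupied = [i for i in range(n) if cells[i] >= 0]
    new = np.full(n, -1, dtype=int)
    for idx, x in enumerate(occupied):
        v = int(cells[x])
        ahead = occupied[(idx + 1) % len(occupied)]
        j = (ahead - x) % n or n                  # cells to the next vehicle; n if alone on the ring
        if v < v_max and j > v + gap:             # rule 1: acceleration
            v = min(v + accel, v_max)
        v = min(v, j - 1)                         # rule 2: slow down (limited to the open gap j-1
                                                  #         so this sketch stays collision-free)
        if v > 0 and rng.random() < p_dec:        # rule 3: random deceleration with probability p_dec
            v = max(v - decel, 0)
        new[(x + v) % n] = v                      # rule 4: advance v cells
    return new

road = np.full(50, -1)
road[[0, 5, 12]] = [2, 0, 4]                      # three vehicles with initial speeds
for _ in range(10):
    road = step(road)
```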

Similarly, the traffic controller may also change its state, specifically the stop-go flag, at the end of each time step.

We conducted experiments in the management of 20 client simulations using the cellular automata simulator covering a 10-intersection corridor. Under steady conditions, the distributed simulation client provided a system representation similar to that of a replicated trial experiment of the entire network, demonstrating that the ad hoc approach offers potential for accurately predicting future system states. Further, when a spike in the traffic flow was introduced into the network, the distributed clients again successfully modeled the traffic flows; however, there was a short delay (up to approximately four minutes) in the upstream client transmitting the increased flows to the downstream clients.

While the cellular automata simulation model allowed an understanding of the behavior of the ad hoc distributed approach to be developed, it is desirable that future experimentation be conducted with a significantly more detailed, robust transportation simulation model. Further, the ability to adapt existing commercial simulation software for use in ad hoc distributed simulations would significantly improve the likelihood that the technology would be used. An initial investigation into implementing the ad hoc strategy using the off-the-shelf transportation simulation


model VISSIM was conducted. VISSIM is widely used by private firms and public agencies for transportation system analysis. VISSIM is a discrete, stochastic, time-step-based microscopic simulation model developed to model urban traffic (freeway and arterial) and public transit operations. The model is capable of simulating a diverse set of transportation network features, such as facility types, traffic control mechanisms, vehicle and driver types, etc. Individual vehicles are modeled in VISSIM using a psycho-physical driver behavior model developed by Wiedemann. The underlying concept of the model is the assumption that a driver can be in one of four driving modes: free driving, approaching, following, or braking [5]. VISSIM version 4.10 was utilized for this investigation.

Investigation into the use of VISSIM has proven very promising. Utilizing the VISSIM COM interface [6], it is possible to access many of the VISSIM objects, methods, and properties necessary to implement simulation rollbacks and to automate VISSIM clients dispersed among workstations connected by a local area network. The initial investigation discussed in this section focuses on the ability to implement a rollback mechanism in VISSIM. Through the VISSIM COM interface it is possible at any time ti during a simulation run to save the current state of the simulation model. At any later time tj, where j > i, it is possible to stop the simulation run, load the simulation state from time ti, update the simulation attributes indicated by the rollback (i.e., the arrival rate on some link), and restart the simulation from time ti. An initial experiment conducted using VISSIM demonstrated that the rollback algorithm was successfully implemented. This simulator was driven by traces of simulation-generated traffic flow data, and was shown to accurately predict future states (as represented by the trace data). Further details of these experiments are presented in [4].

4 Real-Time Dynamic Data Analysis

The precision of the real-time data varies depending on the level of data aggregation. For example, minute-by-minute data are more precise than hourly average data. We examined the creation of an accurate estimate of the evolving state of a transportation system using real-time roadway data aggregated at various update intervals. Using the VISSIM model described in the previous section, a simulation of the transportation network in the vicinity of the Georgia Institute of Technology in Atlanta, Georgia was utilized to represent the real world, with flow data from the Georgia Tech model provided to a smaller simulation of two intersections within the network. The "real-world" flow data (i.e., flow data measured from the large-scale Georgia Tech model) were aggregated over different intervals and used to dynamically drive the two-intersection model. The aim of this study was to explore how well the small-scale simulation model is able to reflect the real world scenario when fed data at different aggregation levels. This work explored congested and non-congested traffic demand at five different data aggregation time intervals: 1 sec., 10 sec., 30 sec., 60 sec., and 300 sec.

For the non-congested conditions, there existed minor differences in the average value of the considered performance metrics (arrival time and delay) and the performance metric difference values for the tested scenarios. However, there was a


clear trend of increasing root mean square error (RMSE) as the aggregation interval increased. Varying the upstream origin of the arrival streams also tended to influence the RMSE values more than the average values. From these results it can be seen that under non-congested conditions, the averages of performance metrics alone are likely not good indicators of the ability of a data driven simulation to reflect real world operations; measures of variation such as RMSE should also be considered.

Unlike the non-congested conditions, the average values of the performance metrics in congested conditions were considerably different for the large real world simulation than for the local simulation. The RMSE values also were significantly greater than those in the non-congested scenarios. There is also not a clear trend of the local simulation providing an improved reflection of the large simulation when given smaller aggregation intervals. For the tested scenarios, the impact of congestion dominated the impact of the selected aggregation interval and upstream arrival pattern. The use of outflow constraints significantly improved the local model performance. These constraints helped capture the impact on the local simulation of congestion that occurs outside the local model boundaries. Where the boundaries of the congested region fall outside of the local simulation, it becomes readily apparent that both the inflow and outflow parameters of the simulation must be dynamically driven to achieve a reasonable reflection of the real world conditions.

In the deployment of in-vehicle simulations, these experiments highlight the need for realistic field-measured inflow and outflow data streams, which are not currently widely available. However, as sensor technologies have advanced, the amount of available real-time field data is increasing dramatically. The quantity of available real-time data is expected to continue to climb at an ever-increasing rate. This tidal wave of real-time data creates the possibility of a wide variety of data driven transportation applications. This effort has begun to examine some of the innumerable potential uses of this data. Further details of these results are described in [7].
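The sketch below illustrates only the aggregation effect on the driving data stream (the study itself compares performance metrics such as arrival time and delay between the local and the large simulation): a synthetic one-second flow series stands in for the "real world" output, is averaged over each update interval, and the RMSE of the resulting piecewise-constant input is computed against the original.

```python
import numpy as np

rng = np.random.default_rng(0)
flow_1s = np.clip(rng.normal(0.3, 0.1, size=3600), 0, None)   # synthetic vehicles/second for one hour

for interval in (1, 10, 30, 60, 300):
    blocks = flow_1s.reshape(-1, interval).mean(axis=1)        # aggregate to the update interval
    reconstructed = np.repeat(blocks, interval)                # what a driven simulation would "see"
    rmse = np.sqrt(np.mean((reconstructed - flow_1s) ** 2))
    print(f"{interval:4d} s aggregation: RMSE = {rmse:.4f}")
```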

5 Data Dissemination

The proposed DDDAS system relies on mobile, wireless ad hoc networks to interconnect sensors, simulations, and servers. The experiments reported earlier were conducted over a local area network, and thus do not take into account the performance of the wireless network. However, the performance of the network infrastructure can play an important role in determining the overall effectiveness of the ad hoc distributed simulation approach. Thus, a separate effort has been devoted to examining the question of the impact of network performance on the results produced by the ad hoc distributed simulation. Clearly, a single-hop wireless link between the vehicles and servers severely limits the wireless coverage area, motivating the use of a multihop network. However, standard routing protocols designed for end-to-end communication in mobile ad hoc networks cause each node to maintain the state of its neighboring nodes for efficient routing decisions. Such a solution does not scale to vehicular networks,


where the nodes are highly mobile and route maintenance becomes expensive. Other routing protocols, such as those discussed in [8,9], attempt to address this problem using flooding or optimistic forwarding techniques. Vehicular ad hoc network (VANET) data dissemination protocols are typically designed to address data sharing among the vehicles and related applications such as multimedia, control data, etc. Distributed simulations, however, define a different data transfer model, and consequently the solutions designed for data sharing applications may not perform well when transposed to the demands of simulations. Additionally, data transfer in VANETs is inherently unreliable, and drops and delays in message delivery can be highly dependent on factors such as traffic density and wireless activity, thus having a strong impact on the simulation itself. To address this challenge, a data dissemination framework addressing the routing demands of a distributed simulation in the VANET environment has been developed. Our framework uses a combination of geographic routing and controlled flooding to deliver messages, with no organization enforced among the vehicles. The design parameters of the framework are currently under study in order to assess how they impact the overall accuracy and reliability of simulation results.
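As a toy illustration of combining geographic routing with controlled flooding, the sketch below forwards a message to the neighbor closest to the destination when one makes progress toward it, and otherwise rebroadcasts probabilistically within a hop budget. The message fields, positions and parameters are illustrative assumptions, not the framework's actual protocol.

```python
import random

def forward(msg, my_pos, neighbors, dest, p_flood=0.6):
    """Return the neighbor ids to send msg to. neighbors: {node_id: (x, y)}."""
    if msg["hops_left"] <= 0:
        return []                                  # hop budget exhausted (a real protocol would also decrement it)
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    closer = {n: p for n, p in neighbors.items() if dist(p, dest) < dist(my_pos, dest)}
    if closer:                                     # geographic routing: greedy progress toward the destination
        return [min(closer, key=lambda n: dist(closer[n], dest))]
    return [n for n in neighbors if random.random() < p_flood]   # controlled-flooding fallback

print(forward({"hops_left": 4}, my_pos=(0, 0),
              neighbors={"a": (1, 1), "b": (-2, 0)}, dest=(5, 5)))   # ['a']
```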

6 Future Research

Research in ad hoc distributed simulations and their application to the management of surface transportation systems is in its infancy, and many questions remain to be addressed. Can such a distributed simulation make sufficiently accurate predictions of future system states to be useful? Can it incorporate new information and revise projections more rapidly and/or effectively than conventional approaches, e.g., global, centralized simulations? Does networking the individual simulators result in better predictions than simply leaving the individual simulators as independent simulations that do not exchange future state information? How would an ad hoc distributed simulation be organized and operated, and what type of synchronization mechanism is required among the simulators? Will the underlying networking infrastructure required to support this concept provide adequate performance? These are a few of the questions that remain to be answered. A broader question concerns the applicability of the ad hoc simulation approach to other domains. While our research is focused on transportation systems, one can imagine that simulations like those discussed here might be used in other on-line simulation applications, such as the management of communication networks or other critical system infrastructures.

Acknowledgement The research described in this paper was supported under NSF Grant CNS-0540160; this support is gratefully acknowledged.


References
1. Werner, J.: Details of the VII Initiative 'Work in Progress' Provided at Public Meeting (2005)
2. Bechler, M., Franz, W.J., Wolf, L.: Mobile Internet Access in FleetNet. In: KiVS (2003)
3. Werner, J.: USDOT Outlines the New VII Initiative at the 2004 TRB Annual Meeting (2004)
4. Fujimoto, R.M., Hunter, M., Sirichoke, J., Palekar, M., Kim, H.-K., Suh, W.: Ad Hoc Distributed Simulations. In: Principles of Advanced and Distributed Simulation (2007)
5. PTV: VISSIM User Manual 4.10. PTV Planung Transport Verkehr AG, Karlsruhe, Germany (2005)
6. PTV: VISSIM COM, User Manual for the VISSIM COM Interface, VISSIM 4.10-12. PTV Planung Transport Verkehr AG, Karlsruhe, Germany (2006)
7. Hunter, M.P., Fujimoto, R.M., Suh, W., Kim, H.K.: An Investigation of Real-Time Dynamic Data Driven Transportation Simulation. In: Winter Simulation Conference (2006)
8. Rahman, W., Olesinski, Gburzynski, P.: Controlled Flooding in Wireless Ad-hoc Networks. In: Proc. of IWWAN (2004)
9. Wu, H., Fujimoto, R., Guensler, R., Hunter, M.: MDDV: A Mobility-Centric Data Dissemination Algorithm for Vehicular Networks. In: Proceedings of the VANET Conference (2004)

Cyberinfrastructure for Contamination Source Characterization in Water Distribution Systems Sarat Sreepathi1, Kumar Mahinthakumar1, Emily Zechman1, Ranji Ranjithan1, Downey Brill1, Xiaosong Ma1, and Gregor von Laszewski2

1 North Carolina State University, Raleigh, NC, USA {sarat_s, gmkumar, emzechma, ranji, brill, xma}@ncsu.edu
2 University of Chicago, Chicago, IL, USA [email protected]

Abstract. This paper describes a preliminary cyberinfrastructure for contaminant characterization in water distribution systems and its deployment on the grid. The cyberinfrastructure consists of the application, middleware and hardware resources. The application core consists of various optimization modules and a simulation module. This paper focuses on the development of specific middleware components of the cyberinfrastructure that enables efficient seamless execution of the application core in a grid environment. The components developed in this research include: (i) a coarse-grained parallel wrapper for the simulation module that includes additional features for persistent execution, (ii) a seamless job submission interface, and (iii) a graphical real time application monitoring tool. The performance of the cyberinfrastructure is evaluated on a local cluster and the TeraGrid.

1 Introduction Urban water distribution systems (WDSs) are vulnerable to accidental and intentional contamination incidents that could result in adverse human health and safety impacts. Identifying the source and extent of contamination (“source characterization problem”) is usually the first step in devising an appropriate response strategy in a contamination incident. This paper develops and tests a preliminary grid cyberinfrastructure for solving this problem as part of a larger multidisciplinary DDDAS [1] project that is developing algorithms and associated middleware tools leading to a full-fledged cyberinfrastructure for threat management in WDSs [2]. The source characterization problem involves finding the contaminant source location (typically a node in a water distribution system) and its temporal mass loading history (“release history”) from observed concentrations at several sensor locations in the network. The release history includes the start time of the contaminant release in the WDS, the duration of release, and the contaminant mass loading during this time. Assuming that we have a “forward simulation model” that can simulate concentrations at various sensor locations in the WDS for given source characteristics, the source characterization problem, which is an “inverse problem”, can be formulated as an optimization problem with the goal of finding a source that can minimize the difference between the simulated and observed concentrations at the


sensor nodes. This approach is commonly termed “simulation-optimization” because the optimization algorithm drives a simulation model to solve the problem. Population-based search methods such as evolutionary algorithms (EAs) are popular for solving this problem owing to their exploratory nature, ease of formulation, flexibility in handling different types of decision variables, and inherent parallelism. Despite their many advantages, EAs can be computationally intensive, as they may require a large number of forward simulations to solve an inverse problem such as source characterization. As the larger DDDAS project relies heavily on EA-based methods [3][4] for solving source characterization and sensor placement problems, an end-to-end cyberinfrastructure is needed to couple the optimization engine to the simulation engine, launch the simulations seamlessly on the grid, and track the solution progress in real time. Thus the primary objective of this research is to develop a prototype of this grid cyberinfrastructure.

1.1 Related Work
Existing grid workflow systems such as CoG Kit [5] and Kepler [6] support preprocessing, post-processing, staging of data and programs, and archival of results for a generic application on the grid. However, they do not provide a custom solution for an application that requires frequent runtime interactions among its components (i.e., optimization and simulation components) at a finer granularity. They also require the execution time of the core component (e.g., the simulation) to be large enough to amortize the overhead induced by the workflow system. In the WDS application, a single simulation instance can take anywhere from several milliseconds to several minutes depending on the network, so a system that must cater to problems of any size could not use existing workflow systems for the smaller problems without a significant performance penalty. To address this, a custom framework is developed in this research that not only aggregates a large number of small computational tasks but also allows for persistent execution of these tasks during interactions with the optimization component in a batch environment. Existing workflow systems also do not provide support for real-time monitoring of simulation-optimization runs from the perspective of a WDS application. Hence a real-time visualization tool has been developed to inform domain scientists of the quantitative progress of the application.

2 Architecture The high-level architecture of the preliminary cyberinfrastructure developed in this paper is shown in Fig. 1. The optimization toolkit (which is a collection of optimization methods) interacts with the simulation component (parallel EPANET) through the middleware. The middleware also communicates with the grid resources for resource allocation and program execution. Typically the user invokes a script that launches the optimization toolkit and the visualization engine from a client workstation. The optimization toolkit then receives observed data from the sensors (or reads a file that has been synthetically generated) and calls the middleware interface to invoke the simulation engine. The


middleware interface then surveys the available resources and launches the simulation engine on the available resources through batch submission scripts or interactive commands. The middleware also transmits the sets of decision variables (e.g., variables representing source characteristics) generated by the optimization engine to the simulation engine via files. The simulation engine calculates the fitness values corresponding to the sets of decision variables sent by the optimization engine. These are then transmitted back to the optimization and visualization engines via files. The optimization engine processes this data and sends new sets of decision variables back to the simulation engine for the next iteration of the algorithm. The simulation engine maintains a persistent state until all the iterations are completed.
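A minimal sketch of this file-based exchange, written from the optimization side, is given below; the file names, CSV layout, and polling interval are hypothetical stand-ins for whatever conventions the actual toolkit and wrapper use.

    import csv, os, time

    def write_decision_variables(path, candidates):
        # One candidate source per row, e.g. (node id, start time, duration, mass rate).
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(candidates)

    def read_fitness(path, poll_seconds=1.0):
        # Block until the simulation engine deposits its output file, then read it.
        while not os.path.exists(path):
            time.sleep(poll_seconds)
        with open(path) as f:
            return [float(row[0]) for row in csv.reader(f)]

    # One generation of the loop (illustrative only):
    # write_decision_variables("gen_0001.in", population)
    # fitness = read_fitness("gen_0001.out")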

Fig. 1. Basic Architecture of the Cyberinfrastructure (sensor data and the optimization toolkit interact through the middleware and visualization components with parallel EPANET instances running on grid resources)

The following subsections provide a brief description of the component developments involved in this cyberinfrastructure. Readers interested in additional details should refer to [7]. Subsequent discussions assume that the optimization engine uses EA-based methods and that the problem solved is source identification. However, the basic architecture is designed to handle any optimization method that relies on multiple simulation evaluations and any WDS simulation-optimization problem.

2.1 Simulation Model Enhancements
The simulation engine, EPANET [8], is an extended-period hydraulic and water-quality simulation toolkit developed at EPA. It was originally developed for the Windows platform and provides a C language library with a well-defined API [8]. The original EPANET was ported to Linux environments and customized to solve simulation-optimization problems by building a “wrapper” around it. For testing purposes, a limited amount of customization was built into the wrapper to solve source identification problems. The wrapper uses a file-based communication system to interoperate with existing EA-based optimization tools developed on diverse platforms such as Java [3] and Matlab [4]. It also aggregates the EPANET simulations into a single parallel execution for multiple sets of source characteristics to amortize the startup costs and minimize redundant computation.
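As a preview of the parallelization described next, the sketch below illustrates, in mpi4py and purely for exposition (the real pepanet wrapper is C code built on the EPANET toolkit), how a batch of candidate sources can be split evenly across MPI ranks and the fitness values gathered back on the master. The run_epanet function and the candidate tuples are placeholders.

    from mpi4py import MPI

    def run_epanet(source):
        # Stand-in for one forward EPANET water-quality run; the real wrapper
        # calls the EPANET toolkit here and returns a fitness value.
        node, start, duration, mass = source
        return 0.0

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        # In the real wrapper these come from the optimization toolkit's input file.
        sources = [(n, 0.0, 3600.0, 1.0) for n in range(97)]
        n = len(sources)
        chunks = [sources[i * n // size:(i + 1) * n // size] for i in range(size)]
    else:
        chunks = None

    my_sources = comm.scatter(chunks, root=0)          # static, even partitioning
    my_fitness = [run_epanet(s) for s in my_sources]   # each rank simulates its share
    gathered = comm.gather(my_fitness, root=0)

    if rank == 0:
        fitness = [f for part in gathered for f in part]  # results in original order
        print(len(fitness), "evaluations collected")

Run under the MPI launcher (e.g., mpiexec -n 16 python wrapper_sketch.py); the contiguous split keeps the gathered fitness values in the same order as the input candidates.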


Parallelization
The parallel version of the wrapper is developed using MPI and referred to as 'pepanet'. The middleware scripts are designed to invoke multiple 'pepanet' instantiations depending on resource availability. Within each MPI program, the master process reads the base EPANET input file (WDS network information, boundary conditions, etc.) and an input file generated by the optimization toolkit that contains the source characteristics (i.e., decision variables). The master process then populates data structures for storing the simulation parameters as well as the multiple sets of contamination source parameters via MPI calls. The contamination source parameter sets are then divided equally among all the processes, ensuring static load balancing. Each process then simulates its assigned group of contamination sources successively. At the completion of the assigned simulations, the master process collects the results from all the member processes and writes them to an output file to be processed by the optimization toolkit.

Persistency
The evolutionary computing based optimization methods currently in use within the project exhibit the following behavior: the optimization method submits a set of evaluations to be computed (a generation), waits for the results, and then generates the next set of evaluations. If the simulation program were invoked separately for every generation, it would have to wait in a batch environment each time to acquire the requisite computational resources. If the pepanet wrapper is made persistent, however, the program needs to wait in the queue only once, when it is first started. Hence pepanet was enhanced to remain persistent across generations. In addition to amortizing the startup costs, the persistent wrapper significantly reduces the wait time in the job scheduling system and eliminates some redundant computation across generations. Once all evaluations are completed for a given generation (or evaluation set), the wrapper maintains a wait state by polling periodically for a sequence of input files whose filenames follow a pattern. The polling frequency can be tuned to improve performance (see Section 3). The pattern for the input and output file names can be specified as command line arguments, facilitating flexibility in the placement of the files as well as standardization of the directory structure for easy archival.

2.2 Job Submission Middleware
Consider the scenario where the optimization toolkit is running on a client workstation and the simulation code is running at a remote supercomputing center. Communication between the optimization and simulation programs is difficult due to the security restrictions placed at current supercomputing centers. The compute nodes on the supercomputers cannot be directly reached from an external network, and the job submission interfaces differ from site to site. In light of these obstacles, a middleware framework based on Python has been developed to facilitate the interaction between the optimization and simulation components and to appropriately allocate resources. The middleware component utilizes public key cryptography to authenticate to the remote supercomputing center from the client site. The middleware then transfers the file generated by the optimization component to the simulation component on the remote site using


available file transfer protocols. It then waits for the computations to be completed at the remote sites and fetches the output file back to the client site. This process is repeated until a termination signal is received from the client side (upon solution convergence or when the iteration limit is reached). The middleware script also polls for resource availability on the remote sites to allocate an appropriate number of processors, minimizing queue wait time by effectively utilizing the backfill window of the resource scheduler. When more than one supercomputer site is involved, the middleware divides the simulations proportionally among the sites based on processor availability and processor speed. A simple static allocation protocol is currently employed.

2.3 Real-Time Visualization
The current visualization toolkit is geared toward the source identification problem and was developed with the following goals in mind: (i) visualize the water distribution system map and the locations where the optimization method is currently searching for contamination sources, and (ii) visualize how the search progresses from one stage (generation) of the optimization algorithm to the next, to facilitate understanding of the convergence pattern of the optimization method. The tool has been developed using Python, Tkinter, and Gnuplot. Fig. 2 shows a screenshot of the visualization tool after the optimization method found the contamination source for an example problem instance. It shows the map of the water distribution system marking the “true” source (as it is known in the hypothetical test case) and the estimated source found by the optimization method. It also provides a plot comparing the release histories of the true and estimated sources. A multi-threaded implementation enables the user to interact with the tool’s graphical interface while the files are being processed in the backend.

Fig. 2. Visualization Tool Interface showing the Water Distribution System Map and Concentration profile for the true (red) and estimated (green) sources
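Returning to the job submission middleware of Section 2.2, the staging pattern it implies (public-key-authenticated copy in, poll, copy out) can be sketched as follows; the host name, remote directory, file names, and polling interval are all hypothetical, and the actual middleware scripts are considerably more elaborate.

    import subprocess, time

    HOST = "user@login.remote-hpc.example.edu"   # hypothetical remote login node
    REMOTE_DIR = "/scratch/wds_run"              # hypothetical working directory

    def stage_in(local_file):
        # Public-key authentication for scp/ssh is assumed to be configured.
        subprocess.run(["scp", local_file, f"{HOST}:{REMOTE_DIR}/"], check=True)

    def wait_and_fetch(remote_file, local_file, poll_seconds=10):
        # Poll until the simulation side has produced its output, then copy it back.
        probe = ["ssh", HOST, f"test -f {REMOTE_DIR}/{remote_file}"]
        while subprocess.run(probe).returncode != 0:
            time.sleep(poll_seconds)
        subprocess.run(["scp", f"{HOST}:{REMOTE_DIR}/{remote_file}", local_file],
                       check=True)

    # Per generation: stage_in("gen_0001.in"); wait_and_fetch("gen_0001.out", "gen_0001.out")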

3 Performance Results Performance results are obtained for solving a test source identification problem involving a single source. The sensor data is synthetically generated using a



hypothetical source. An evolutionary algorithm (EA) is used for solving this problem [3]. The following platforms are used for evaluating the performance of the cyberinfrastructure: (i) Neptune, an 11-node Opteron cluster at NCSU consisting of 22 2.2 GHz AMD Opteron(248) processors and a GigE interconnect, and (ii) the TeraGrid Linux cluster at NCSA consisting of 887 1.3-1.5 GHz Intel Itanium 2 nodes and a Myrinet interconnect. TeraGrid results are confined to simulations deployed on a single cluster but with the optimization component residing on a remote client site. Additional results, including preliminary multi-cluster TeraGrid results, are available in [7]. The cyberinfrastructure has also been demonstrated on SURAgrid resources [9]. For timing purposes, the number of generations in the EA is fixed at 100 even though convergence is usually achieved for the test problem well before the 100th generation. The population size was varied from 600 to 6000, but the results in this paper are restricted to the larger population size. Timers were placed within the main launch script, middleware scripts, optimization toolkit, and the simulation engine to quantify the total time, optimization time, simulation time, and overhead due to file movements. Additional timers were placed within the simulation engine to break down the time spent in waiting or “wait time” (which includes the optimization time and all overheads) and the time spent in calculations. Preliminary tests revealed that the waiting time within the simulation code was exceedingly high when the optimization toolkit and the root process of the wrapper (simulation component) are on different nodes of the cluster. When both are placed on the same compute node, the wait time was reduced by a factor of more than 15 to acceptable values. Additional investigation indicated that these delays were due to file system issues. Further optimization of the polling frequency within the simulation engine improved the wait time by an additional factor of 2. Once these optimizations were performed, the wait time predominantly consisted of optimization calculations, since the overhead is relatively negligible. Fig. 3 illustrates the parallel performance of the application after these optimizations on the Neptune cluster up to 16 processors. As expected, the computation time within the simulation engine (parallel component) scales nearly linearly while the overhead and the optimization time (serial component) remain more or less constant.


Fig. 3. Performance of the application up to 16 processors on Neptune cluster

For timing studies on the TeraGrid, the optimization engine was launched on Neptune and the simulations on the TeraGrid cluster. Again, preliminary performance


tests indicated that the wait times were significantly impacted by file system issues. The performance improved by a factor of three when the application was moved from an NFS file system to a faster GPFS file system. Further improvement in wait time (about 15%) was achieved by consolidating the network communications within the middleware. Fig. 4 shows the speedup behavior on the TeraGrid cluster for up to 16 processors. While the computation time speeds up nearly linearly, the wait time is much higher when compared to the timings of Neptune (Fig. 3). A further breakdown of the wait time indicated that 14% was spent in optimization and the remaining 86% in overheads including file transfer costs. The increased file transfer time is not unexpected, as the transfers occur over a shared network between the geographically distributed optimization and simulation components. Comparison of simulation times with Neptune indicates that the TeraGrid processors are also about three times slower. It is noted that the WDS problem solved is relatively small (about 20 seconds (Neptune) or 1 minute (TeraGrid) per 6000 simulations using 1 processor), which makes the overheads appear relatively high. As the overhead will remain approximately constant with increased problem sizes, we expect the results to improve considerably for larger problems. Furthermore, several enhancements are planned to minimize overheads (see next section).


Fig. 4. Performance of the application up to 16 processors on NCSA Teragrid Linux Cluster

4 Conclusions and Future Work An end-to-end solution for solving WDS contamination source characterization problems in grid environments has been developed. This involved coarse-grained parallelization of the simulation module, middleware for seamless grid deployment, and a visualization tool for real-time monitoring of the application's progress. Various performance optimizations such as improving processor placements, minimizing file system overheads, eliminating redundant computations, amortizing queue wait times, and multi-threading the visualization were carried out to improve turnaround time. Even with these optimizations, the file movement overheads were significant when the client and server sites were geographically distributed, as in the case of the TeraGrid. Several future improvements are planned, including optimization algorithm changes that allow file movements and optimization calculations to overlap with simulation calculations, localizing optimization calculations on remote sites through partitioning techniques, and minimizing file transfer overhead using grid communication libraries.


Acknowledgments. This work is supported by the National Science Foundation (NSF) under Grant Nos. CMS-0540316, ANI-0540076, CMS-0540289, CMS-0540177, and NMI-0330545. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

References
1. Darema, F.: Introduction to the ICCS 2006 Workshop on Dynamic Data Driven Applications Systems. Lecture Notes in Computer Science 3993, pp. 375-383 (2006)
2. Mahinthakumar, G., von Laszewski, G., Ranjithan, S., Brill, E.D., Uber, J., Harrison, K.W., Sreepathi, S., Zechman, E.M.: An Adaptive Cyberinfrastructure for Threat Management in Urban Water Distribution Systems. Lecture Notes in Computer Science, Springer-Verlag, pp. 401-408 (2006) (International Conference on Computational Science (3) 2006: 401-408)
3. Zechman, E.M., Ranjithan, S.: Evolutionary Computation-based Methods for Characterizing Contaminant Sources in a Water Distribution System. Journal of Water Resources Planning and Management (submitted)
4. Liu, L., Zechman, E.M., Brill, E.D., Jr., Mahinthakumar, G., Ranjithan, S., Uber, J.: Adaptive Contamination Source Identification in Water Distribution Systems Using an Evolutionary Algorithm-based Dynamic Optimization Procedure. Water Distribution Systems Analysis Symposium, Cincinnati, OH, August 2006
5. CoG Kit Project Website, http://www.cogkit.org
6. Kepler Project Website, http://www.kepler-project.org
7. Sreepathi, S.: Cyberinfrastructure for Contamination Source Characterization in Water Distribution Systems. Master's Thesis, North Carolina State University, December 2006
8. Rossman, L.A.: The EPANET Programmer's Toolkit. In: Proceedings of the Water Resources Planning and Management Division Annual Specialty Conference, ASCE, Tempe, AZ (1999)
9. Sreepathi, S.: Simulation-Optimization for Threat Management in Urban Water Systems. Demo, Fall 2006 Internet2 Meeting, December 2006

Integrated Decision Algorithms for Auto-steered Electric Transmission System Asset Management James McCalley, Vasant Honavar, Sarah Ryan, William Meeker, Daji Qiao, Ron Roberts, Yuan Li, Jyotishman Pathak, Mujing Ye, and Yili Hong Iowa State University, Ames, IA 50011, US {jdm, honavar, smryan, wqmeeker, daji, rroberts, tua, jpathak, mye, hong}@iastate.edu

Abstract. Electric power transmission systems are comprised of a large number of physical assets, including transmission lines, power transformers, and circuit breakers, that are capital-intensive, highly distributed, and may fail. Managing these assets under resource constraints requires equipment health monitoring integrated with system-level decision-making to optimize a number of operational, maintenance, and investment-related objectives. Industry processes to these ends have evolved ad hoc over the years, and no systematic structures exist to coordinate the various decision problems. In this paper, we describe our progress in building a prototype structure for this purpose together with a software-hardware environment to deploy and test it. We particularly focus on the decision algorithms and the Benders approach we have taken to solve them in an integrated fashion. Keywords: asset management, Benders decomposition, condition monitoring, decision algorithms, electric transmission, optimization, service-oriented architecture, software-hardware.

1 Introduction There are three interconnected electric power transmission grids in North America: the eastern grid, the western grid, and Texas. Within each grid, power supplied must equal power consumed at any instant of time; also, power flows in any one circuit depend on the topology and conditions throughout the network. This interdependency means that should any one element fail, repercussions are seen throughout the interconnection, affecting system economic and engineering performance. Overall management requires decisions regarding how to operate, how to maintain, and how to reinforce and expand the system, with the objectives being risk minimization and social welfare maximization. The three decision problems share a common dependence on equipment health or propensity to fail; in addition, their solutions heavily influence future equipment health. As a result, they are coupled, and optimality requires solution as a single problem. However, because the network size (number of nodes and branches) together with the number of failure states is so large, such a problem, if solved using traditional optimization methods, is intractable. In addition, the three decision problems differ significantly in decision horizon, with operational decisions


implemented within minutes to a week, maintenance decisions within weeks to a couple of years, and investment decisions within 2-10 years. Therefore, excepting the common dependence and effect on equipment health, the coupling is sequential, with the solution to a latter-stage problem depending on the solutions to former-stage problems. Because of this, the industry has solved them separately, with the coupling represented in a very approximate fashion via human communication mechanisms. We conjecture that the resulting solutions are not only suboptimal, but are not even very good solutions, a conjecture which motivates the work reported here. A previous paper [1] described an initial design for a hardware-software prototype capable of auto-steering the information-decision cycles inherent to managing operations, maintenance, and planning of high-voltage electric power transmission systems. Section 2 of this paper describes a refined version of this overall design together with progress in implementing it. Section 3 summarizes the various optimization problems, providing problem statements when solved individually. Section 4 provides a new formulation, based on Benders decomposition, for a subgroup of problems, an approach that we eventually intend to apply to the entire set. Section 5 concludes.

2 Overall Design and Recent Progress
Figure 1 illustrates the design of our prototype system for auto-steering information-decision processes for electric transmission system asset management. This section overviews the intended implementation and recent progress of the five layers.

Layer 1, The power system: The prototype centers on a continuously running model of the Iowa power system, using network data provided by a local utility company and a commercial-grade operator training simulator (OTS). The OTS is provided by ArevaT&D (www.areva-td.com) and comprises the same energy management software system used by many major transmission control centers all over the world. The dataset on which this software system runs is the same dataset used by the utility company at their control center. This presents information security requirements that must be satisfied in our lab, since the data represents a critical national infrastructure. The work to implement this is intensive and is being supported under a cost-sharing arrangement between ArevaT&D, ISU, and the utility company.

Layer 2, Condition sensors: Transformers are the most expensive single transmission asset, with typical costs between $1-5M. The utility company has over 600 of them, some of which have well exceeded their ~40 year design life. All units undergo yearly dissolved gas-in-oil analysis (DGA) which, similar to a human blood test, provides information useful for problem diagnosis and prediction. We have obtained this data for all units and are using it to perform life prediction of the units. In addition, we are installing a real-time DGA monitor (www.kelman.co.uk) in one of the largest and oldest units and have been working on methods of transforming this data into health indicators that can be used in our decision algorithms.

Layer 3, Data communication and integration: The transformer monitor is equipped with a cellular modem provided by Cannon (www.cannontech.com) that communicates the real-time data to our lab. A federated data integration system has been designed to provide efficient, dependable, and secure mechanisms for interfacing Layer 4 data transformation algorithms with the data resources [2].


Layer 4, Data processing and transformation: The data available for equipment health prediction includes transformer monitoring and test data as well as weather/vegetation data, which are useful for estimating probabilistic failure indices of transformers and overhead transmission lines [3].

Fig. 1. Prototype system design

Layer 5, Simulation and decision: This layer utilizes probabilistic failure indices from Layer 4 together with short- and long-term system forecasts to drive integrated stochastic simulation and decision models. The resulting operational policies, maintenance schedules, and facility expansion plans are implemented on the power system (as represented by the ArevaT&D simulator). The decision models are also used to discover the value of additional information. This valuation will be used to drive the deployment of new sensors and the redeployment of existing sensors, impacting Layer 2. The integration of decision models is further described in Section 3.

A service-oriented architecture (SOA) is used for this software system. This framework for Power System Asset Management, PSAM-s, employs a Web services-based SOA. The core of the framework is the PSAM-s engine, comprised of multiple services responsible for enabling interaction between users and other services that offer specific functionality. These services are categorized into internal services (part of the PSAM-s engine) and external services. The internal services include submission, execution, brokering, monitoring, and storage. The external services include data provision and information processing. These services and their overall architecture are illustrated in Fig. 2; additional description is provided in [4].

Fig. 2. A Service-Oriented Architecture (the PSAM-s engine's submission, execution, brokering, monitoring, and storage services, linked through domain-specific ontologies to external data-providing services for equipment nameplate, operating, condition, and maintenance histories, and to information-processing services containing the data analysis logic)

3 Layer 5: Simulation, Decision and Information Valuation
There are 6 basic risk-economy decision problems associated with power system operations, maintenance, and planning, as illustrated in Table 1. The table illustrates the sequential coupling between the various problems in terms of the information that is passed from one to the other. Information required to solve a problem is in its diagonal block and in the blocks to the left of that diagonal. Output information from solving a problem is below its diagonal block and represents input information for the lower-level problems. We briefly summarize each of these problems in what follows.

Operations: There are three operational sub-problems [5, 6].
• Unit commitment (UC): Given an hourly total load forecast over the next day or week, identify the hourly sequence of generation commitments (which generators are interconnected to the grid?) to maximize welfare (minimize costs), subject to the requirement that load must be satisfied and to physical limits on each generator associated with supply capability, start-up, and shut-down times.
• Optimal power flow (OPF): Given the unit commitment solution together with load requirements at each bus and the network topology, determine the allocation of load to each generator and each generator's voltage set point to maximize social welfare, subject to Kirchhoff's laws governing electricity behavior (encapsulated in a set of nonlinear algebraic "power flow" equations) together with constraints on branch flows, node voltages, and generator supply capability (a toy dispatch sketch in this spirit appears at the end of this section).

Table 1. Summary of Power System Risk-Economy Decision Problems. Each row lists the information required to solve that problem (its own data plus the outputs of the problems above it):
• Unit commitment (UC) (Operations, T = 1-168 hrs): total load.
• Optimal power flow (OPF) (Operations): units committed; bus loads, topology.
• Security assessment (SA) (Operations): units committed; operating condition; weather, failure data, instantaneous condition data.
• Short-term maintenance (Maintenance, T = 1-5 yrs): units committed; operating condition (risk); failure data, condition history; maintenance effects, maintenance history, resources.
• Long-term maintenance (Maintenance): units committed; operating condition (risk); failure data, condition history; ST maintenance schedule, ST equipment deterioration rate; cost of capital, maintenance history.
• Investment planning (Planning, T = 5-10 yrs): units committed; operating condition (risk); failure data, condition history; ST maintenance schedule, ST equipment deterioration rate; LT maintenance schedule, LT equipment deterioration rate; cost of capital.

• Security assessment (SA): Given the (economically optimal) operating condition, find the best tradeoff between minimizing supply costs and minimizing the risk associated with potential failures in the network. Presently, the industry solves this problem by imposing hard constraints on risk (or on conditions associated with risk), thus obtaining a single-objective optimization problem, but it is amenable to a multiobjective formulation.

Maintenance: There are two maintenance-related sub-problems [7, 8].
• Short-term maintenance: Given a forecasted future operating sequence over an interval corresponding to a budget period (e.g., 1 year), together with a set of candidate maintenance tasks, select and schedule the maintenance tasks that most effectively reduce cumulative future risk, subject to resource (budget and labor) and scheduling constraints.
• Long-term maintenance: For each class of components, given a future loading forecast, determine an inspection, maintenance, and replacement schedule to maximize its operational reliability and its residual life at minimum cost. This multiobjective problem is typically addressed with the single objective of maximizing residual life subject to constraints on operational reliability and cost.

Planning [9, 10]: Given a set of forecasted future load growths and corresponding operating scenarios, determine a network expansion plan that minimizes investment costs, energy production costs, and the risk associated with potential failures in the network, subject to Kirchhoff's laws together with constraints on branch flows, node voltages, and generator physical supply capabilities. This problem is often solved by minimizing investment and production costs while imposing constraints on risk.
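None of the production-scale formulations above is reproduced here; as the toy dispatch sketch referenced in the OPF item, the following example solves a two-generator economic dispatch with one PTDF-style line limit using scipy's linprog. All costs, limits, and sensitivities are invented.

    import numpy as np
    from scipy.optimize import linprog

    # Two generators serving a 150 MW load; decision variables are [P1, P2] in MW.
    cost = np.array([20.0, 28.0])            # $/MWh (invented)
    p_max = [(0.0, 100.0), (0.0, 120.0)]     # generator limits in MW

    # Power balance: P1 + P2 = 150 MW.
    A_eq = np.array([[1.0, 1.0]])
    b_eq = np.array([150.0])

    # One monitored line with invented injection sensitivities (PTDF-like):
    # flow = 0.6*P1 - 0.3*P2 and |flow| <= 40 MW.
    ptdf = np.array([0.6, -0.3])
    A_ub = np.vstack([ptdf, -ptdf])
    b_ub = np.array([40.0, 40.0])

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=p_max, method="highs")
    print(res.x, res.fun)   # dispatch in MW and total cost in $/h

With these numbers the line limit binds, so the cheaper unit cannot simply be dispatched to its maximum; this is the flavor of coupling between economics and network constraints that the OPF and SA problems manage at full scale.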


4 Benders Decomposition and Illustration
Benders decomposition is an appropriate method for problems that are sequentially nested such that the solution to latter-stage problems depends on the solution to former-stage problems. Mixed-integer problems can be posed in this way, as can stochastic programming problems. The operational problem described in Section 3, consisting of the sequence of UC, OPF, and SA, is both. To illustrate the concepts, consider Problem P:

    min  z = c(x) + d(y)        (1)
    s.t. A(x) ≥ b               (1a)
         E(x) + F(y) ≥ h        (1b)

This problem can be represented as a two-stage decision problem [11]:

Stage 1 (Master Problem): Decide on a feasible x* only considering (1a):

    min  z = c(x) + α'(x)       (2)
    s.t. A(x) ≥ b               (2a)

where α'(x) is a guess of the stage 2 cost as a function of the stage 1 decision variable x, to be updated by stage 2.

Stage 2 (Subproblem): Decide on a feasible y* considering (1b) given x* from stage 1.

    α(x*) = min  d(y)           (3)
    s.t.   F(y) ≥ h − E(x*)     (3a)

The partition theorem for mixed-integer programming problems [12] provides the optimality rule on which Benders decomposition is based: if we obtain the optimal solution (z*, x*) in the first stage and then the optimal solution y* in the second stage, and c(x*)+d(y*)=z*, then (y*, x*) is the optimal solution of Problem P. The interaction between stages 1 and 2 is shown in Fig. 3. The procedure of Benders decomposition is a learning process (try-fail-try-inaccurate-try-…-solved). In the left part of Fig. 3, when the stage 1 problem is solved, the optimal value is sent to stage 2. The stage 2 problem then has two steps:

Fig. 3. Benders decomposition (modified from [11]). Left: deterministic problem. Right: stochastic problem (P1 is the probability of stage 2 scenario 1).


1) Check if the optimal solution from stage 1 is feasible. If it is not, the stage 2 problem sends feasibility cuts back to stage 1, which is re-solved under the additional constraints found to be in violation in stage 2. 2) Check if stage 1's guess of the stage 2 cost is accurate enough. If it is not, a new estimate of α'(x) is sent to stage 1. If the optimality rule is met, the problem is solved. This process is easily extended to the stochastic programming case, as illustrated in the right part of Fig. 3, where the optimal value from stage 1 is sent to stage 2, which has multiple scenarios. The process is exactly the same as in the deterministic case, except that all constraint cuts and the optimal value from stage 2 are weighted by the probability of the scenario. A 6-bus test system, Fig. 4, is used to illustrate. Generators are located at buses 1, 2, and 6; loads at buses 3, 4, and 5. The possible contingencies considered include any failure of a single circuit. Detailed data for the system are provided in [5]. Figure 5 plots the total cost of supply against time for a 24-hour period for two different scenarios: "average" uses contingency probabilities under normal weather, and "10*average" uses contingency probabilities under stormy weather. We observe in Fig. 5 the increased cost required to reduce the additional risk due to the stormy weather. Although the UC solution is the same in the two cases illustrated in Fig. 5, it changes if the contingency probabilities are zero, an extreme situation which in fact corresponds to the way UC is solved in practice, where UC and SA are solved separately. This is evidence that better solutions do in fact result when the different problems are solved together.

Fig. 4. 6-bus test system (generators at buses 1, 2, 6; loads L1-L3 at buses 3, 4, 5; circuits 1-7)

Fig. 5. Effect of contingency probabilities: total supply cost ($) over the 24-hour period for the "average" and "10*average" cases
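To make the cut-generation loop of (1)-(3a) concrete, the toy Python sketch below applies Benders decomposition to the two-variable LP min x + 2y subject to x ≥ 1, x + y ≥ 4, y ≥ 0. The subproblem dual is available in closed form for this instance, scipy's linprog solves the master, and the numbers are invented rather than taken from the 6-bus study.

    from scipy.optimize import linprog

    # Toy instance of Problem P: c(x) = x, d(y) = 2y, A(x) >= b is x >= 1,
    # and E(x) + F(y) >= h is x + y >= 4 (with y >= 0).
    c, d, b, h = 1.0, 2.0, 1.0, 4.0
    cuts = []                      # each optimality cut: alpha >= lam * (h - x)
    UB = float("inf")

    for it in range(20):
        # Stage 1 (master): min c*x + alpha s.t. x >= b and the cuts collected so far.
        A_ub = [[-lam, -1.0] for lam, _ in cuts] or None
        b_ub = [-rhs for _, rhs in cuts] or None
        res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(b, None), (0.0, None)], method="highs")
        x_star, alpha = res.x
        LB = c * x_star + alpha             # master objective = current lower bound

        # Stage 2 (subproblem): min d*y s.t. y >= h - x*, y >= 0 (solved by hand).
        y_star = max(0.0, h - x_star)
        lam = d if h - x_star > 0 else 0.0  # optimal dual of the coupling constraint
        UB = min(UB, c * x_star + d * y_star)

        if UB - LB < 1e-8:                  # optimality rule: c(x*) + d(y*) = z*
            break
        cuts.append((lam, lam * h))         # add the cut alpha >= lam*(h - x)

    print(f"x* = {x_star:.2f}, y* = {y_star:.2f}, z* = {UB:.2f} ({it + 1} iterations)")

On this instance the loop converges in two iterations to x* = 4, y* = 0, z* = 4, mirroring the master/subproblem exchange sketched in Fig. 3.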

5 Conclusions Electric power grids are comprised of aging, capital-intensive equipment; its availability largely determines the economic efficiency of today's electricity markets, on which a nation's economic health depends, and its failure results in increased energy cost at best and widespread blackouts at worst. The balance between economy and reliability, or risk, is maintained via the solution of a series of optimization problems in operations, maintenance, and planning, problems that traditionally are solved separately. Yet these problems are coupled, and so solving them together necessarily improves on the composite solution. In this paper, we described a hardware-software system designed to address this issue, and we reported on our progress in developing this system,


including acquisition of a real-time transformer monitor and of a commercial-grade power system simulator together with corresponding data modeling the Iowa power system. We also designed a service-oriented architecture to guide development of our software system. Finally, we implemented an optimization framework based on Benders decomposition to efficiently solve our sequential series of decision problems. This framework is promising; we expect it to be an integral part of our power system asset management prototype as we continue to move forward in its development.

Acknowledgments The work described in this paper is funded by the National Science Foundation under grant NSF CNS0540293.

References
1. J. McCalley, V. Honavar, S. Ryan, W. Meeker, R. Roberts, D. Qiao and Y. Li, "Auto-steered Information-Decision Processes for Electric System Asset Management," in Computational Science - ICCS 2006, 6th International Conference, Reading, UK, May 28-31, 2006, Proceedings, Part III, Lecture Notes in Computer Science, Vol. 3993, V. Alexandrov, G. van Albada, P. Sloot, J. Dongarra (Eds.), 2006.
2. J. Pathak, Y. Jiang, V. Honavar, J. McCalley, "Condition Data Aggregation with Application to Failure Rate Calculation of Power Transformers," Proc. of the Hawaii International Conference on System Sciences, Jan 4-7, 2006, Poipu Kauai, Hawaii.
3. F. Xiao, J. McCalley, Y. Ou, J. Adams, S. Myers, "Contingency Probability Estimation Using Weather and Geographical Data for On-Line Security Assessment," Proc. of the 9th Int. Conf. on Probabilistic Methods Applied to Power Systems, June 11-15, 2006, Stockholm, Sweden.
4. J. Pathak, Y. Li, V. Honavar, J. McCalley, "A Service-Oriented Architecture for Electric Power Transmission System Asset Management," 2nd International Workshop on Engineering Service-Oriented Applications: Design and Composition, Dec. 4, 2007, Chicago, IL.
5. Y. Li, J. McCalley, S. Ryan, "Risk-Based Unit Commitment," to appear in Proc. of the 2007 IEEE PES General Meeting, June 2007, Tampa, FL.
6. F. Xiao, J. McCalley, "Risk-Based Multi-Objective Optimization for Transmission Loading Relief Strategies," to appear in Proc. of the 2007 IEEE PES General Meeting, June 2007, Tampa, FL.
7. J. McCalley, V. Honavar, M. Kezunovic, C. Singh, Y. Jiang, J. Pathak, S. Natti, J. Panida, "Automated Integration of Condition Monitoring with an Optimized Maintenance Scheduler for Circuit Breakers and Power Transformers," Final report to the Power Systems Engineering Research Center (PSerc), Dec. 2005.
8. Y. Jiang, J. McCalley, T. Van Voorhis, "Risk-based Maintenance Optimization for Transmission Equipment," IEEE Trans. on Power Systems, Vol. 21, No. 3, Aug. 2006, pp. 1191-1200.
9. J. McCalley, R. Kumar, O. Volij, V. Ajjarapu, H. Liu, L. Jin, W. Zhang, "Models for Transmission Expansion Planning Based on Reconfigurable Capacitor Switching," Chapter 3 in Electric Power Networks, Efficiency, and Security, John Wiley and Sons, 2006.
10. M. Ye, S. Ryan, and J. McCalley, "Transmission Expansion Planning with Transformer Replacement," Proc. of the 2007 Industrial Engineering Research Conference, Nashville, TN, May 20-22, 2007.
11. S. Granville et al., "Mathematical decomposition techniques for power system expansion planning," Report 2473-6 of the Electric Power Research Institute, February 1988.
12. J. Benders, "Partitioning procedures for solving mixed-variables programming problems," Numerische Mathematik 4: 238-252, 1962.

DDDAS for Autonomic Interconnected Systems: The National Energy Infrastructure C. Hoffmann1, E. Swain2, Y. Xu2, T. Downar2, L. Tsoukalas2, P. Top3, M. Senel3, M. Bell3, E. Coyle3, B. Loop5, D. Aliprantis3, O. Wasynczuk3, and S. Meliopoulos4

1 Computer Sciences, Purdue University, West Lafayette, Indiana 47907, USA
2 Nuclear Engineering, Purdue University, West Lafayette, Indiana 47907, USA
3 Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907, USA
4 Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
5 PC Krause and Associates, West Lafayette, Indiana 47906, USA

Abstract. The most critical element of the nation’s energy infrastructure is our electricity generation, transmission, and distribution system known as the “power grid.” Computer simulation is an effective tool that can be used to identify vulnerabilities and predict the system response for various contingencies. However, because the power grid is a very large-scale nonlinear system, such studies are presently conducted “open loop” using predicted loading conditions months in advance and, due to uncertainties in model parameters, the results do not provide grid operators with accurate “real time” information that can be used to avoid major blackouts such as were experienced on the East Coast in August of 2003. However, the paradigm of Dynamic Data-Driven Applications Systems (DDDAS) provides a fundamentally new framework to rethink the problem of power grid simulation. In DDDAS, simulations and field data become a symbiotic feedback control system and this is refreshingly different from conventional power grid simulation approaches in which data inputs are generally fixed when the simulation is launched. The objective of the research described herein was to utilize the paradigm of DDDAS to develop a marriage between sensing, visualization, and modelling for large-scale simulation with an immediate impact on the power grid. Our research has focused on methodological innovations and advances in sensor systems, mathematical algorithms, and power grid simulation, security, and visualization approaches necessary to achieve a meaningful large-scale real-time simulation that can have a significant impact on reducing the likelihood of major blackouts.

1 Introduction

The August 2003 blackout cascaded throughout a multi-state area of the U.S. and parts of Canada within a few minutes. Hindsight reveals the blackout was the result of the system operating too close to the point where synchronous


operation of the generators could be lost (transient stability limit). In this situation, the inability of operators and system controllers to quickly respond to unexpected anomalies in the system resulted in the blackout. An August 2004 IEEE Spectrum article entitled "The Unruly Power Grid" [1] emphasizes the need for accurate system modelling and fast computation in order to provide operators and system controllers with timely on-line information regarding the dynamic security of the entire electric grid, thereby reducing the likelihood of future cascading system failures.

One of the main difficulties in controlling complex, distributed systems such as the electric power grid is their enormous scale and complexity. Presently, transient stability studies are conducted open loop based upon predicted load profiles months in advance. As such, the results are only approximations of what might happen based upon the assumptions, load estimates, and parameters assumed in the computer studies. Distributed dynamic estimation based on dynamic GPS-synchronized data has the potential of alleviating the shortcomings of the present state of the art. The simulation of the evolution of the system state forward in time using dynamic and synchronized measurements of key system parameters appears to be within reach. Fast, accurate, data-driven computation can revolutionize operating procedures and open the door to new and challenging research in the operation and control of the electric power grid.

There are several key areas that must be addressed to achieve the overall research objective: faster-than-real-time simulation, distributed dynamic sensing, the development of behavioral models for situations where physics-based models are impracticable, and visualization. Faster-than-real-time simulation alone is insufficient – the simulation must be accurate and track the true operating conditions (topology, parameters, and state) of the actual system. An integrated sensing network is essential to ensure that the initial state of the simulated system matches the present state of the actual system, and to continually monitor and, if necessary, adjust the parameters of the system model. Moreover, it has been recognized that the dynamic characteristics of loads can influence the voltage stability of the system. Due to their sheer number and the uncertainties involved, it does not appear possible to model distributed loads and generators, such as wind turbines and photovoltaic sources, using deterministic physics-based models. Neural networks may be used in cases where physics-based models do not exist or are too complex to participate in a faster-than-real-time simulation. Finally, even if the goal of an accurate faster-than-real-time predictive simulation is achieved, the number of potential contingencies to be evaluated at any instant of time will be intractably large. Therefore, methods of identifying vulnerabilities using visualization and the development of suitable stability metrics, including heuristic approaches, are sought. The research performed at Purdue University, the Georgia Institute of Technology, and PC Krause and Associates has addressed each of these critical areas. The purpose of this paper is to describe the research conducted heretofore and key results obtained to date.


Fig. 1. Grid frequency variations at two locations in the power grid separated by 650 mi. NTP and post-processing techniques were used to synchronize time stamps of the data to within 10 μs.

2 Distributed Dynamic Sensing

Presently, sufficient measurements are taken so that, with the aid of a state estimator, the voltage magnitudes, phase angles, and real and reactive power flows can be monitored and updated throughout the grid on a 30-90 second cycle. A current project, supported by the Department of Energy (DOE), has as its goal to instrument every major bus with a GPS-synchronized phasor measurement unit. While this would provide system operators with more up-to-date top-level information, a concurrent bottom-up approach can provide even more information, reduce uncertainty, and provide operators and researchers alike with more accurate models, an assessment of the true state of the system, and diagnostic information in the event of a major event such as a blackout.

The large scale of the power grid means that many sensors in various locations will be required for monitoring purposes. The number of sensors required for effective monitoring means they must be inexpensive, while maintaining the required accuracy in measurements. A custom-designed sensor board including data acquisition hardware and a time stamping system has been developed. This sensor uses the Network Time Protocol (NTP) and WWVB, a US-based time signal that, because of its long wavelength, propagates far into buildings (unlike GPS). This new board will provide the required accuracy in measurement and time stamping at a very reasonable cost. The goal is synchronization of sensors deployed throughout the grid to within 10 μs or better, because a one-degree error in a phase estimate corresponds to a 46-μs delay.

Since November 2004, six proof-of-concept sensors have been deployed in various locations in the U.S. These sensors have been in operation and have been collecting data since that time. The sensors communicate through the Internet, and regularly send the collected data to a central server. A significant amount of effort has gone into developing an automated system wherein the sensor data is automatically processed and archived in a manner that can be easily retrieved via a web interface. In the future, this will allow additional sensors to be added with only minimal effort. A typical example is shown in Fig. 1, which depicts


Fig. 2. Cramer–Rao lower bounds on the variance of the error for estimates of the signal frequency based on 1200 samples per second (20 per period) of the 60 Hz power signal with additive white Gaussian noise

frequency estimates from two proof-of-concept sensors (one in West Lafayette, Indiana, and the other in Maurice, Iowa) over the period of an hour. The data show that the relative frequency changes in the electric grid are synchronized even at geographically distant sensor locations. The technique used to estimate the frequency of the power signal is equivalent to the maximum likelihood estimate at large SNRs, which is typically the case with our measurements. We have derived the Cramer–Rao lower bounds on the variance of the error of this (unbiased) estimator; the results are shown in Fig. 2.
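The estimator itself is not detailed here; as a hedged illustration of the general approach, the following numpy sketch forms a periodogram-peak estimate (which coincides with the maximum-likelihood frequency estimate for a single tone in white Gaussian noise at high SNR) and compares it against the textbook Cramer–Rao bound. The amplitude, noise level, and true frequency are invented; only the 1200 samples/s rate is taken from the Fig. 2 setup.

    import numpy as np

    fs, N = 1200.0, 1200                  # 1200 samples/s (20 per 60 Hz cycle), 1 s window
    A, f_true, sigma = 1.0, 60.017, 0.05  # invented amplitude, frequency, noise level

    rng = np.random.default_rng(0)
    n = np.arange(N)
    x = A * np.cos(2 * np.pi * f_true * n / fs) + sigma * rng.standard_normal(N)

    # Periodogram-peak estimate on a zero-padded grid (a local interpolation or
    # finer search would be needed to approach the bound more closely).
    nfft = 1 << 20
    f_grid = np.fft.rfftfreq(nfft, d=1.0 / fs)
    f_hat = f_grid[np.argmax(np.abs(np.fft.rfft(x, nfft)))]

    # Cramer-Rao lower bound for the frequency of a single tone in additive white
    # Gaussian noise: var(f_hat) >= 12 * fs**2 / ((2*pi)**2 * eta * N * (N**2 - 1)),
    # with eta = A**2 / (2 * sigma**2) the signal-to-noise ratio.
    eta = A**2 / (2 * sigma**2)
    crlb = 12 * fs**2 / ((2 * np.pi) ** 2 * eta * N * (N**2 - 1))
    print(f"f_hat = {f_hat:.4f} Hz, CRLB std = {np.sqrt(crlb) * 1e3:.3f} mHz")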

3 Distributed Heterogeneous Simulation

Recent advancements in Distributed Heterogeneous Simulation (DHS) have enabled the simulation of the transient performance of power systems at a scope, level-of-detail, and speed that has been heretofore unachievable. DHS is a technology of interconnecting personal computers, workstations, and/or mainframes to simulate large-scale dynamic systems. This technology provides an increase in simulation speed that approaches the cube of the number of interconnected computers rather than the traditional linear increase. Moreover, with DHS, different simulation languages may be interconnected locally or remotely, translation to a common language is unnecessary, legacy code can be used directly, and intellectual property rights and proprietary designs are completely protected. DHS has been successfully applied to the analysis, design, and optimization of energy systems of ships and submarines [2], aircraft [3], land-based vehicles [4], the terrestrial power grid [5], and was recently featured in an article entitled “Distributed Simulation” in the November 2004 issue of Aerospace Engineering [6]. DHS appears to be an enabling technology for providing faster-than-real-time detailed simulations for automated on-line control to enhance survivability of large-scale systems during emergencies and to provide guidance to ensure safe


and stable system reconfiguration. A DHS of the Indiana/Ohio terrestrial electric grid has been developed by PC Krause and Associates with cooperation from the Midwest Independent System Operator (MISO), which may be used as a real-life test bed for DDDAS research.

4 Visualization

The computational and sensory tasks generate data that must be scrutinized to assess the quality of the simulation and of the models used to predict state, and to assess implications for the power grid going forward. We envision accurate visual representations of extrapolations of state evolution, damage assessment, and control action consequences to guide designers and operators to make timely and well-informed decisions. The overriding concern is to ensure smooth functioning of the power grid; thus it is critical to provide operators with situation awareness that lets them monitor the state of the grid rapidly and develop contingency plans if necessary. An easily understood visual presentation of state variables, line outages, transients, phase angles, and more is a key element for designers and operators to be in a position to perform optimally. The Final Report on the August 14th Blackout in the United States and Canada by the U.S.–Canada Power System Outage Task Force [7] states clearly that the absence of accurate real-time visualization of the state of the electric grid at FirstEnergy (FE) was an important shortcoming that led to the blackout of August 2003. FE was simply in the dark and could not recognize the deterioration leading to the widespread power failure. Our mission, therefore, is to assist grid system operators to monitor, predict, and take corrective action.

The most common representation of the transmission grid is the one-line diagram, with power flow indicated by a digital label for each arc of the graph. Recently, the diagrams have been animated [8], allowing insight into power flow. Flow has also been visualized using dynamically sized pie charts that grow superlinearly as a line approaches overload. The one-line diagram is impractical for visualizing complex power systems: buses overlap with each other and with lines to which they are not connected, and lines intersect. Although considerable research in information visualization has been devoted to graph drawing [9], contouring is a simpler and more effective method for power grids. Contouring is a classic visualization technique for continuous data, which can be applied to discrete data by interpolation. Techniques have been developed for contouring bus data (voltage, frequency, phase, local marginal prices) and line data (load, power transfer distribution factors) [8, 10, 11]. Phase-angle data, measured directly via GPS-based sensors or semi-measured/calculated from existing sensors/state estimators, may be used as an indicator of transient stability margin or via "energy" functions. We have developed tools for visualizing phase-angle data that allow the operators to detect early and correct quickly conditions that can lead to outages [12]. To provide real-time assessment of power grid vulnerability, large amounts of measured data must be analysed rapidly and presented in a timely manner for corrective response. Prior work in visualization of power grid vulnerability handles only
the case of static security assessment; a power flow is run on all contingencies in a set and the numerical data is visualized with decorated one-line diagrams [13]. Strategically, it is an advantage to use preventative techniques that lead to fast apprehension of the visually presented data and thus to situation awareness. Displays designed according to the theory of preattentive processing (from cognitive psychology) can provide such a visualization environment [14]. These displays would use a combination of colors and glyphs to present the underlying information, as depicted in Fig. 3.

Fig. 3. Visualization of Midwestern power grid

5 Neural Distributed Agents

Neural networks (NNs) provide a powerful means of simplifying simulations of subsystems that are difficult or impossible to simulate using deterministic physics-based models, such as the composite loads in the power grid. The ability of a neural distributed agent [15] to reduce the computational burden and to help achieve real-time performance has been demonstrated. Instead of directly computing the state of the coupled systems, the states of each subsystem are precomputed and functionalized to the state variables using NN methods. The state of the coupled system for any given condition is then constructed by interpolating the pre-calculated data. The approach used involved decomposition of the system into several subsystems and utilizing a NN to simulate the most computationally intensive subsystems. The NN produces pre-computed outcomes based on inputs to respective subsystems. The trained NN then interpolates to obtain a solution for each iteration step. Periodic updates of the NN subsystem solution will ensure that the overall solution remains consistent with the physics-based DHS. NN-based models further facilitate real-time simulation by reducing computational complexity of subsystem models. Similar concepts have been used for real-time simulation of the dispersion model of chemical/biological agents in a 3-D environment [16].


This concept was demonstrated using a small closed electrical system consisting of a generator connected to transmission lines. The generator subsystem was created in Simulink [17] and provided the template for training the NN. The generator subsystem was then replaced by a NN configuration with the goal of reducing by at least two orders of magnitude the time required to simulate the computationally intensive physics-based generator equations used in the Simulink subsystem. The current phase of the project restricts the NN to simulating only steady-state electrical generator response. The generator subsystem physics model provided corresponding I/O pairs by varying parameters in the transmission lines. These pairs were then used to train the NN such that the subsystem output would be pre-computed prior to any simulations using the subsystem. More than 160 system variations were obtained that produced a stable steady-state system. To accurately train the network, the generator subsystem data obtained from the original model was split into a training data set and a testing data set. The training data set was used to create a feed-forward back propagation NN. The output of the NN consisted of six electrical current values. The electrical current consisted of the magnitude and phase angle for each of the three phases of the system. The NN input consisted of six components of the electrical voltage, similar to the components of the current. The trained networks were tested using the testing data set consisting of 20 data points that were used to compare the results of the completed NN with the results produced by the original generator subsystem model. The mean square error between the outputs of the physics and NN models were used to measure the validity of the NN training.
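A minimal sketch of this kind of surrogate training is given below, using scikit-learn's MLPRegressor as a stand-in for the feed-forward back-propagation network described above; the synthetic data, layer sizes, and random seeds are assumptions for illustration, not values from the project.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the ~160 stable steady-state cases: six voltage
# components in (magnitude and phase per phase), six current components out.
rng = np.random.default_rng(0)
V = rng.uniform(0.9, 1.1, size=(160, 6))
I_out = V @ rng.normal(size=(6, 6)) * 0.1        # placeholder "physics"

V_train, V_test, I_train, I_test = train_test_split(
    V, I_out, test_size=20, random_state=0)      # 20 held-out test points, as in the text

# Feed-forward network trained on the generator subsystem's I/O pairs.
nn = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
nn.fit(V_train, I_train)

# Mean square error between the surrogate and the "physics" on the test set.
print(mean_squared_error(I_test, nn.predict(V_test)))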

Fig. 4. Neural network test system for electrical generator

The generator NN was then integrated into Simulink as shown in Fig. 4. This system interacts with the generator load, which is connected through the connection ports on the right side of the figure. A Matlab script was created that allows the Simulink model to be called iteratively until a solution is reached. The neural network-based system showed very good agreement with the original Simulink model. Using a convergence tolerance of 0.0004 for the relative iteration change of the voltage values, the maximum relative error in the voltage values
was 0.00056 and the maximum relative error in the current values was 0.0022. Execution time comparisons showed a dramatic time reduction when using the NN. The Simulink model required about 114,000 iterations and 27 minutes CPU time to simulate 30 seconds real time on a 531-MHz computer. The NN-based system was able to determine solutions for over half the cases in less than 15 seconds on a 2.16-GHz computer. All cases were completed in less than 120 seconds. Further reductions in the execution time are anticipated, as well as the ability to perform transient analysis.
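The iterative coupling and convergence test described above can be sketched as a simple fixed-point loop; the surrogate and load functions below are hypothetical placeholders rather than the actual Simulink subsystem equations, and only the 0.0004 tolerance is taken from the text.

import numpy as np

def nn_generator(v):
    """Stand-in for the trained NN: maps 6 voltage components to 6 currents."""
    return 0.5 * v                        # placeholder surrogate

def load_voltage(i):
    """Stand-in for the load subsystem: maps currents back to voltages."""
    return 1.0 - 0.3 * i                  # placeholder load model

v = np.ones(6)                            # initial voltage guess
for step in range(10_000):
    v_new = load_voltage(nn_generator(v))
    if np.max(np.abs((v_new - v) / v)) < 0.0004:   # relative-change tolerance
        break
    v = v_new
print(step, v_new)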

6 Summary

The research conducted at Purdue University, the Georgia Institute of Technology, and PC Krause and Associates address the critical areas needed to achieve the overall goal of an accurate real-time predictive simulation of the terrestrial electric power grid: faster-than-real-time simulation, distributed dynamic sensing, the development of neural-network-based behavioral models for situations where physics-based models are impracticable, and visualization for situation awareness and identification of vulnerabilities. DDDAS provides a framework for the integration of the results from these coupled areas of research to form a powerful new tool for assessing and enhancing power grid security.

References

1. Fairley, P.: The unruly power grid. IEEE Spectrum (August 2004)
2. Lucas, C.E., Walters, E.A., Jatskevich, J., Wasynczuk, O., Lamm, P.T., Neeves, T.E.: Distributed heterogeneous simulation: A new paradigm for simulating integrated naval power systems. WSEAS/IASME Trans. (July 2004)
3. Lucas, C.E., Wasynczuk, O., Lamm, P.T.: Cross-platform distributed heterogeneous simulation of a MEA power system. In: Proc. SPIE Defense and Security Symposium (March 2005)
4. PC Krause and Associates: Virtual prototyping vehicle electrical system, management design tool. Technical report, SBIR Phase I Contract W56HZV-04-C-0126 (September 2004)
5. Jatskevich, J., Wasynczuk, O., Mohd Noor, N., Walters, E.A., Lucas, C.E., Lamm, P.T.: Distributed simulation of electric power systems. In: Proc. 14th Power Systems Computation Conf., Sevilla, Spain (June 2002)
6. Graham, S., Wong, I., Chen, W., Lazarevic, A., Cleek, K., Walters, E., Lucas, C., Wasynczuk, O., Lamm, P.: Distributed simulation. Aerospace Engineering (November 2004) 24–27
7. U.S.–Canada Power System Outage Task Force: Final report on the August 14th blackout in the United States and Canada. Technical report (April 2004), https://reports.energy.gov
8. Overbye, T.J., Weber, J.D.: New methods for the visualization of electric power system information. In: Proc. IEEE Symposium on Information Visualization (2000)
9. Wong, N., Carpendale, S., Greenberg, S.: Edgelens: An interactive method for managing edge congestion in graphs. In: Proc. IEEE Symposium on Information Visualization (2003)
10. Anderson, M.D., et al.: Advanced graphics zoom in on operations. IEEE Computer Applications in Power (2003)
11. Weber, J.D., Overbye, T.J.: Power system visualization through contour plots. In: Proc. North American Power Symposium, Laramie, WY (1997)
12. Solomon, A.: Visualization strategies for monitoring power system security. Master's thesis, Purdue University, West Lafayette, IN (December 2005)
13. Mahadev, P., Christie, R.: Envisioning power system data: Vulnerability and severity representations for static security assessment. IEEE Trans. Power Systems (4) (November 1994)
14. Hoffmann, C., Kim, Y.: Visualization and animation of situation awareness, http://www.cs.purdue.edu/homes/cmh/distribution/Army/Kim/overview.html
15. Tsoukalas, L., Gao, R., Fieno, T., Wang, X.: Anticipatory regulation of complex power systems. In: Proc. European Workshop on Intelligent Forecasting, Diagnosis, and Control, Santorini, Greece (June 2001)
16. Boris, J.P., et al.: CT-Analyst: Fast and accurate CBR emergency assessment. In: Proc. First Joint Conf. on Battle Management for Nuclear, Chemical, Biological and Radiological Defense, Williamsburg, VA (November 2002)
17. The Mathworks: Simulink. http://www.mathworks.com/products/simulink/

Implementing Virtual Buffer for Electric Power Grids

Rong Gao and Lefteri H. Tsoukalas

Applied Intelligent Systems Lab, School of Nuclear Engineering, Purdue University, 400 Central Drive, West Lafayette, IN, USA
{gao, tsoukala}@ecn.purdue.edu

Abstract. The electric power grid is a vital network for every aspect of our lives. The lack of a buffer between generation and consumption makes the grid unstable and fragile. While large-scale power storage is not yet technically and economically feasible, we argue that a virtual buffer can be implemented effectively through a demand-side management strategy built upon the dynamic data driven paradigm. An intelligent scheduling module implemented inside a smart meter enables customers to access electricity safely and economically. The managed use of electricity acts as a buffer that provides the much-needed stability for the power grid. Pioneering efforts to implement these concepts have been conducted by groups such as the Consortium for the Intelligent Management of the Electric Power Grid; the progress made is reported in this paper.

Keywords: Electric Power Grid, Artificial Intelligence, Dynamic Data Driven.

1 Introduction

The electric power grid is a vital network for every aspect of our lives. The grid is so complex that it experiences the same instability problems as other complex systems do [1]. The system can drift into an unstable operating point without notice until a large-scale collapse spreads, as in the major northeast blackout of 2003. The fragility of the electric power grid is, to a great extent, due to the fact that electricity cannot be stored in the grid: electricity is consumed the moment it is generated. The lack of a cushion between generation and consumption is a major destabilizing factor for the power system. It is instructive to study another complex system, the Internet. The Internet connects billions of users around the world, and its degrees of connectivity and irregularity are enormous. Yet it achieves surprisingly high stability despite its openness. Users have great freedom to use resources with little restriction, something that is hard to imagine for the electric power grid. The key to this impressive performance is the protocols, a set of rules to which every participant agrees. Protocols regulate the creation, transfer, and use of resources. The Internet protocols rest on the assumption that everyone has equal access to the resources on the network. This principle of fairness is the guideline for resolving conflicts that originate from competition for resources. The protocols are the reason the Internet remains functional even in a hostile environment.


The protocols are feasible and enforceable because of one key assumption: the resource (information) can be stored somewhere in the network. This assumption is the cornerstone of many key concepts in the protocols, such as routing and conflict resolution. In the Internet, resources are generated, stored, and consumed. The lag between generation and consumption ranges from milliseconds to seconds or minutes, depending on the traffic in the network. This delay has proven to be an important stabilizing factor for the network. With this successful case at hand, it is natural to ask whether we can duplicate the same kind of success in the electric power grid. As an initial attempt, we could develop protocols similar to those used in the Internet to regulate the generation, transportation, and consumption of electricity. This is certainly unrealistic for the current power grid. However, as discussion and research on smart power meters have been under way for some time, we have reason to believe that in the near future every household will have one of these intelligent devices installed. The intelligent meter is connected to the network and can communicate with the outside world, such as the electricity supplier. The meter will have computing capability, making the implementation of a set of fairly complicated protocols possible. With such smart meters at our disposal, only one thing stands between us and an Internet-like, stable electric power grid: a means to store electricity. Considerable effort has been spent on electricity storage, and some successes have been reported [2], but a large-scale storage solution that could make a real impact on the power grid is still not technically and economically feasible. In this paper, we argue that, through the use of information technologies, we can implement a virtual buffer between generation and consumption without the need to physically store electricity in the grid.

2 Virtual Buffer for Electric Power Grid

The lack of a buffer between generation and consumption is the source of problems for many complex systems, including the electric power grid. Even though there is as yet no feasible way to physically store large amounts of electricity in the network, we can mimic the effect of storage through a virtual buffer. A virtual buffer between generation and consumption can be created through a demand-side management strategy built upon the dynamic data driven paradigm. With the emergence of intelligent meters, it becomes possible to dynamically schedule the electricity use of every customer. We argue here that this dynamic scheduling creates a virtual buffer between generation and consumption. Under the new paradigm, each customer's electricity consumption is intelligently managed. Customers do not power up their electricity-hungry appliances at will; rather, they make smart decisions after balancing costs and benefits. For example, non-urgent activities, such as laundry, can be scheduled for times of day when electricity is abundant and cheap. The cost of electricity is determined by the supply-to-demand ratio and the capacity of the network to transfer the resources. This managed use of resources is analogous to the access control widely used in the Internet. A buffer between generation and consumption is therefore created virtually; no physical laws are broken. The electricity is still actually consumed
when generated. However, from the customer point of view, with dynamic consumption scheduling the resources (electricity) are created and then stored somewhere in the power grid before they are used. The analogy is shown in Fig. 1. The virtual buffer can greatly increase the stability of the power grid.
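The effect of such scheduling can be illustrated with a minimal sketch; the hourly prices, appliance duration, and deadline are hypothetical, and the paper does not prescribe a particular scheduling algorithm.

def schedule_flexible_load(prices, duration, deadline):
    """Pick the cheapest start hour for a deferrable load (e.g. laundry)
    that must finish by `deadline`.  `prices` maps hour -> price per kWh."""
    candidates = range(0, deadline - duration + 1)
    return min(candidates,
               key=lambda h: sum(prices[h + k] for k in range(duration)))

# Hypothetical hourly prices for one day, starting at midnight.
prices = [0.12, 0.10, 0.09, 0.09, 0.10, 0.12, 0.15, 0.20, 0.22, 0.21, 0.18,
          0.16, 0.15, 0.14, 0.14, 0.15, 0.18, 0.22, 0.25, 0.24, 0.20, 0.16,
          0.13, 0.11]
print(schedule_flexible_load(prices, duration=2, deadline=8))  # -> 2 (02:00-04:00)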

Fig. 1. Internet and Power Grid with Virtual Buffer: the Internet is a buffered network, while the intelligent power grid, otherwise unbuffered, gains a virtual buffer through managed consumption

Dynamic scheduling has to be carried out by software agents, or intelligent agents. The intelligent agents act on behalf of their clients, making reasonable decisions based on an analysis of the situation. One of the most important analytical capabilities an agent must possess is anticipation: the intelligent agent needs to predict its client's future consumption pattern to make scheduling possible. In other words, load forecasting is the central piece of such a system. A pioneering effort conducted by CIMEG recognized this need and advanced many techniques to make it possible.


3 CIMEG and TELOS

In 1999, EPRI and DOD funded the Consortium for the Intelligent Management of the Electric Power Grid (CIMEG) to develop intelligent approaches to defending power systems against potential threats [3]. CIMEG was led by Purdue University and included partners from The University of Tennessee, Fisk University, TVA, and ComEd (now Exelon). CIMEG advanced an anticipatory control paradigm with which power systems can act proactively based on early perceptions of potential threats. It uses a bottom-up approach to circumvent the technical difficulty of defining the global health of the power system at the top level, as shown in Fig. 2. The concept of the Local Area Grid (LAG) is central to CIMEG. A LAG is a demand-based autonomous entity consisting of an appropriate mixture of different customers, charged with the necessary authority to maintain its own health by regulating or curtailing the power consumption of its members. Once all LAGs have achieved security, the whole grid, which is constructed from a number of LAGs in a hierarchical manner, achieves security as well. To pursue the health of a LAG, intelligent agents are used. They monitor every load within the LAG, forecasting the power consumption of each individual load and taking anticipatory actions to prevent potential cascades of faults. A prototypical system called Transmission Entities with Learning-capabilities and Online Self-healing (TELOS) has been developed and implemented in the service area of Exelon-Chicago and Argonne National Laboratory [4].

Fig. 2. CIMEG's picture of the electric power grid: generators and the transmission, sub-transmission, and distribution levels serve residential, medium-large, large, and very large customers within the customer-side agent space, with links to other LAGs


CIMEG's mission is to create a platform for managing the power grid intelligently. In CIMEG's vision, customers play a more active role than in the current power system infrastructure. A great deal of solicitation and negotiation is involved, as shown in Fig. 3. The customer, represented by an intelligent meter in Fig. 3, predicts its need for electricity and places orders in the market. The size of the order is influenced by the market price of electricity, which is in turn determined by the balance of demand and supply and by the capacity of the network. Economic models with price elasticity are used in this process [5]. The active interaction between customers and suppliers creates a virtual buffer between consumption and generation, as discussed earlier.

Fig. 3. Interactions between customers and suppliers (flowchart elements: the intelligent meter with its power prediction and economic model, ordering, pricing information, the transmission/distribution agent with security and capacity checks, backup, scheduling, and the generation agent)
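A toy version of the ordering interaction in Fig. 3 might look as follows; the price formula and elasticity value are illustrative assumptions only, and the actual economic models are those cited in [5].

def clearing_price(demand, supply, capacity, base_price=0.10):
    """Price rises with the demand/supply ratio and as flows approach capacity."""
    scarcity = demand / supply
    congestion = min(demand, supply) / capacity
    return base_price * scarcity * (1.0 + congestion)

def order(meter_forecast, price, elasticity=-0.3, ref_price=0.10):
    """Constant-elasticity adjustment of the forecast demand to the posted price."""
    return meter_forecast * (price / ref_price) ** elasticity

demand, supply, capacity = 950.0, 1000.0, 1200.0   # MW, hypothetical
p = clearing_price(demand, supply, capacity)
print(round(p, 4), round(order(100.0, p), 1))       # posted price, adjusted order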

One of the key assumptions taken by CIMEG is that the demand of customers can be predicted to a useful degree of accuracy. Among many existing prediction algorithms, neural networks outperform others because of their superb capabilities in handling non-linear behaviors [6]. TELOS achieves improved forecasting accuracy by performing load forecast calculations at the customer site. The customer is capable of tracking local weather conditions as well as the local load demand. TELOS forecasts are based on neurofuzzy algorithms, which are highly adaptive to changing conditions. Preliminary results show that simple feed forward neural networks are well suited to predicting load at the customer scale. Fig. 4 shows neural-network-based load prediction for a large commercial customer.
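As an illustration only, a simple least-squares autoregressive forecaster on an hourly load series is sketched below; TELOS itself uses neurofuzzy algorithms, and the synthetic load data here is a placeholder.

import numpy as np

def fit_ar(load, n_lags=24):
    """Least-squares autoregressive model: predict the next hour from the last n_lags hours."""
    X = np.array([load[t - n_lags:t] for t in range(n_lags, len(load))])
    y = load[n_lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

hours = np.arange(24 * 60)    # 60 days of synthetic hourly load
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) \
       + np.random.default_rng(1).normal(0, 3, hours.size)

coef = fit_ar(load)
next_hour = load[-24:] @ coef          # one-step-ahead forecast
print(round(float(next_hour), 1))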


Fig. 4. Neural-network-based load forecasting for an actual commercial customer

Fig. 5. The prediction results with (bottom, error 0.03) and without (upper, error 0.087) compensation; the upper panel plots the prediction against the original data, the lower panel the final prediction


Neural networks are well suited to handling steady-state cases, where typical behaviors have been observed and generalized. However, unexpected events, such as a popular televised sports program, can cause large prediction errors. CIMEG has developed a fuzzy logic module to handle such situations. This module, called PROTREND, quickly detects any deviation from the steady state and generates a compensation for the final prediction. The results in Fig. 5 show a consumption pattern over a winter break. PROTREND effectively compensates for a large portion of the error and makes the final prediction more reliable.
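A crisp-threshold caricature of this compensation step is sketched below; PROTREND itself is based on fuzzy logic, and the threshold and blending factor here are hypothetical.

import numpy as np

def compensate(nn_prediction, recent_actual, recent_predicted, k=2.0, alpha=0.7):
    """If the latest residual deviates strongly from the usual spread,
    blend the NN prediction toward the most recently observed level."""
    residuals = np.asarray(recent_actual) - np.asarray(recent_predicted)
    if abs(residuals[-1]) > k * np.std(residuals[:-1]):
        return alpha * recent_actual[-1] + (1 - alpha) * nn_prediction
    return nn_prediction

# A holiday-like drop: the last actual value falls well below the predictions.
actual    = [0.60, 0.58, 0.61, 0.59, 0.30]
predicted = [0.61, 0.59, 0.60, 0.60, 0.58]
print(compensate(0.59, actual, predicted))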

4 Conclusions and Future Work

Regulation between generation and consumption is vital for the stability of a complex system with limited resources. An effective way to provide regulation is to create buffers where resources can be stored when not needed and released when requested. We have shown that an effective form of buffer can be simulated by intelligent management of consumption when no means of storage is available. CIMEG has produced promising results, demonstrating the feasibility of intelligent management based on the anticipation of consumption; accurate load forecasting is possible. However, CIMEG's demonstration was implemented on a very small system. Large-scale demonstrations are necessary, both in simulation and in theoretical analysis.

Acknowledgments. This project is partly supported by the National Science Foundation under contract no. 0540342-CNS.

References

1. Amin, M.: Toward Self-Healing Energy Infrastructure Systems. IEEE Computer Applications in Power 14(1), 20–28 (2001)
2. Ribeiro, P.F., et al.: Energy Storage Systems for Advanced Power Applications. Proceedings of the IEEE 89(12), 1744–1756 (2001)
3. CIMEG: Intelligent Management of the Power Grid: An Anticipatory, Multi-Agent, High Performance Computing Approach. EPRI, Palo Alto, CA, and Applied Intelligent Systems Lab, School of Nuclear Engineering, Purdue University, West Lafayette, IN (2004)
4. Gao, R., Tsoukalas, L.H.: Anticipatory Paradigm for Modern Power System Protection. In: ISAP, Lemnos, Greece (2003)
5. Gao, R., Tsoukalas, L.H.: Short-Term Elasticities via Intelligent Tools for Modern Power Systems. In: MedPower'02, Athens, Greece (2002)
6. Tsoukalas, L., Uhrig, R.: Fuzzy and Neural Approaches in Engineering. Wiley, New York (1997)

Enhanced Situational Awareness: Application of DDDAS Concepts to Emergency and Disaster Management* Gregory R. Madey1, Albert-László Barabási2, Nitesh V. Chawla1, Marta Gonzalez2, David Hachen3, Brett Lantz3, Alec Pawling1, Timothy Schoenharl1, Gábor Szabó2, Pu Wang2, and Ping Yan1 1 Department of Computer Science & Engineering University of Notre Dame, Notre Dame, IN 46556, USA {gmadey, nchawla, apawling, tschoenh, pyan}@nd.edu 2 Department of Physics University of Notre Dame, Notre Dame, IN. 46556, USA {alb, m.c.gonzalez, gabor.szabo, pwang2}@nd.edu 3 Department of Sociology University of Notre Dame, Notre Dame, IN 46556, USA {dhachen, blantz}@nd.edu

Abstract. We describe a prototype emergency and disaster information system designed and implemented using DDDAS concepts. The system is designed to use real-time cell phone calling data from a geographical region, including calling activity – who calls whom, call duration, services in use, and cell phone location information – to provide enhanced situational awareness for managers in emergency operations centers (EOCs) during disaster events. Powered-on cell phones maintain contact with one or more within-range cell towers so as to receive incoming calls. Thus, location data about all phones in an area are available, either directly from GPS equipped phones, or by cell tower, cell sector, distance from tower and triangulation methods. This permits the cell phones of a geographical region to serve as an ad hoc mobile sensor net, measuring the movement and calling patterns of the population. A prototype system, WIPER, serves as a test bed to research open DDDAS design issues, including dynamic validation of simulations, algorithms to interpret high volume data streams, ensembles of simulations, runtime execution, middleware services, and experimentation frameworks [1].

* The material presented in this paper is based in part upon work supported by the National Science Foundation, the DDDAS Program, under Grant No. CNS-050312.

1 Introduction

For disaster and emergency response managers to perform effectively during an event, some of their greatest needs include quality communication capabilities and high levels of situational awareness [2-4]. Reports from on-scene coordinators, first responders, public safety officials, the news media, and the affected population can provide managers with point data about an emergency, but those on-scene reports are often inaccurate, conflicting, and incomplete, with gaps in geographical and temporal coverage. Additionally, those reports must be fused into a coherent picture of the entire affected area to enable emergency managers to respond effectively. The prototype Wireless Phone Based Emergency Response System (WIPER) is designed using the concepts of Dynamic Data Driven Application Systems (DDDAS) with the goal of enhancing the situational awareness of disaster and emergency response managers [5, 6]. The subsequent sections review open research issues with the DDDAS concept and the design and development features of the WIPER prototype that address some of those open issues. Those features include an analysis of the local and global structure of a large mobile communication network, several algorithms for detecting anomalies in streaming data, agent-based simulations for classification and prediction, a distributed system design using web services, and the design and implementation of the WIPER DSS interface.

2 Dynamic Data Driven Application Systems

Dynamic data driven application systems (DDDAS) were explored in detail in two NSF workshops: one in early 2000 [7] and one in 2006 [1]. The first workshop concluded that the DDDAS concept offered the potential of greatly improving the accuracy and efficiency of models and simulations. The workshop final report identified the need for further research in the areas of 1) dynamic, data driven application technologies, 2) adaptive algorithms for injecting and steering real-time data into running simulations, and 3) systems software that supports applications in dynamic environments. At subsequent conferences, initial research and applications exploring these research areas were reported [8]. A fourth area of research important to the DDDAS concept emerged, that of measurement systems: the dynamic steering of the data collection needed by the simulations may require improvements in measurement, observation and instrumentation methods. In 2004, Darema described the DDDAS concept as follows:

Dynamic Data Driven Application Systems (DDDAS) entails the ability to incorporate additional data into an executing application - these data can be archival or collected on-line; and in reverse, the ability of applications to dynamically steer the measurement process. The paradigm offers the promise of improving modeling methods, and augmenting the analysis and prediction capabilities of application simulations and the effectiveness of measurement systems. This presents the potential to transform the way science and engineering are done, and induce a major impact in the way many functions in our society are conducted, such as manufacturing, commerce, hazard management, and medicine [8].

The second major NSF workshop on DDDAS (2006) highlighted progress to date, and research yet to be conducted on open issues in the DDDAS concept [1, 9]. The prototype WIPER system explores all four research areas relevant to the DDDAS concept [5, 6] and many of the open DDDAS research issues. Those open DDDAS research areas include: 1) Dynamic and continuous validation of models, algorithms, systems, and system of systems: WIPER uses an ensemble of agent-based simulations
to test hypothesis about the emergency event, dynamically validating those simulations against streaming data, 2) Dynamic data driven, on demand scaling and resolution – the WIPER simulations request detailed data from the data source, providing higher resolution where needed to support dynamic validation, 3) Data format: collections taken across different instrument types can range in format and units – the WIPER data source includes multiple types of data, including location and movement, cell call data, service type, e.g., voice, data, SMS, 4) System Software, especially runtime execution, and middleware service and computational infrastructure – WIPER is employing a novel application of web services (SOA), messaging, and asynchronous Javascript and XML (AJAX) to integrate heterogeneous services and user displays, 5) Mathematical and Statistical Algorithms, especially advances in computational simulation tools for modeling, and system and integration of related algorithms – WIPER includes new algorithms for monitoring streaming data [10, 11] and new insight from mobile phone call data that will be incorporated into planned algorithms for anomaly detection [12, 13].

3 Data

The WIPER system uses both actual call and location data provided by a cellular carrier and synthetic data to simulate emergencies. The actual data is used to calibrate the generation of synthetic data and to help design the anomaly detection algorithms. An anomaly, a possible indication of an emergency, triggers the execution of an ensemble of simulations, each dynamically validated against new streaming data. All data is anonymized to protect privacy. During development, testing and evaluation, the data is stored in a database with software modules streaming the data to simulate the real-time data streams the system would see if deployed. The data that the WIPER system uses is aggregate in nature, does not track individual cell phones by actual user ID, and does not include the content of phone calls or messages. An example of the call activity over a 24 hour period for a small city with 4 cell towers is shown in Fig. 1. The synthetic data is validated against such data.

Fig. 1. Actual mobile phone calling activity per hour from a small city (20,000 population) for a 24 hour period starting at midnight. The four data series are associated with 4 different cell towers (all cell sectors combined for each tower). This data is used to calibrate synthetic data and to validate anomaly detection algorithms.
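The replay of stored records as a simulated real-time stream, described above, could be sketched as follows; the database schema, column names, and speed-up factor are hypothetical.

import sqlite3, time

def replay(db_path="call_archive.db", speedup=60.0):
    """Stream archived call records in timestamp order, sleeping between
    records so that a consumer sees them as a (scaled) real-time feed."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT ts, tower_id, n_calls FROM call_activity ORDER BY ts")
    prev_ts = None
    for ts, tower_id, n_calls in rows:
        if prev_ts is not None:
            time.sleep(max(0.0, (ts - prev_ts) / speedup))
        prev_ts = ts
        yield ts, tower_id, n_calls       # hand the record to a detector

# for record in replay():
#     feed_anomaly_detector(record)       # hypothetical downstream consumer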

4 Design

The design and operation of the WIPER DDDAS prototype are shown schematically in Fig. 2. The system has five components: 1) Data Source, 2) Historical Data Storage, 3) Anomaly Detection and Alert System (DAS), 4) Simulation and Prediction System (SPS), and 5) Decision Support (DSS) with Web Interface. Additional details about the operation of each component can be found elsewhere [5, 6]. The Simulation and Prediction System is triggered by the detection of an anomaly and an alert signal from the DAS. An ensemble of agent-based simulations evaluates possible explanations for the anomaly and alert. The dynamically validated simulation is used to classify the event and to predict its evolution. This and other WIPER-provided information is displayed on a web-based console used by emergency managers for enhanced situational awareness, as shown in Fig. 3.

Fig. 2. Architecture of the WIPER DDDAS system: Either synthetic data or actual data from the cell phone providers is used for training, real time anomaly detection and dynamic validation of the simulation and prediction system


Fig. 3. The WIPER DSS console is web-based, using asynchronous Javascript and XML (AJAX). Emergency managers can view streaming call activity statistics from the area of an incident, maps, weather information, GIS visualizations, and animations derived from agent-based simulation used to predict the evolution of an incident.

5 Data, Algorithms and Analysis

The DDDAS concept is characterized by dynamic data, the first two D's in DDDAS. In the WIPER system the dynamic data is cell phone activity: temporal calling patterns, statistics, locations, movement, and the evolving social networks of callers. In order to improve the detection and classification of anomalies and to dynamically validate predictive simulations, the WIPER project has to date conducted five different but complementary investigations. Each contributes understanding and methods for anomaly detection on streaming cell phone activity data.

5.1 Structure and Tie Strengths in Mobile Communication Networks

Cell phone calling activity databases provide detailed records of human communication patterns, offering novel avenues to map and explore the structure of social and communication networks potentially useful in anomaly detection. This investigation examined the communication patterns of several million mobile phone users, allowing the simultaneous study of the local and the global structure of a society-wide communication network. We observed a coupling between interaction strengths and the network's local structure, with the counterintuitive consequence that social
networks are robust to the removal of the strong ties, but fall apart following a phase transition if the weak ties are removed [13].

5.2 Anomaly Detection Using a Markov Modulated Poisson Process

Cell phone call activity records the behavior of individuals and thus reflects underlying human activities across time. It therefore exhibits multi-level periodicity: weekly, daily, hourly, and so on. Simple stochastic models that rely on aggregate statistics are not able to differentiate between normal daily variations and legitimate anomalous (and potentially crisis) events. In this investigation we developed a framework for unsupervised learning using a Markov modulated Poisson process (MMPP) to model the data sequence, and we use the posterior distribution to calculate the probability of the existence of anomalies over time [11].

5.3 Mapping and Visualization

The cell phone calling data currently includes user locations and activity at a cell-sized level of resolution. The size of a wireless cell can vary widely and depends on many factors, but these can be generalized in a simple way using a Voronoi diagram. A Voronoi lattice is a tiling of polygons in the plane constructed in the following manner: given a set of points P (in our case, a set of towers), construct a polygon around each point p0 in P such that every point in the polygon around p0 is closer to p0 than to any other point in P. Thus we can construct a tiling of a GIS space into cells around our towers. We build a 3D image based on the activity at the site of interest, as shown in Fig. 4. This 3D view gives a good representation of the comparative activity levels in the cells [6, 14].

5.4 Hybrid Clustering Algorithm for Outlier Detection on Streaming Data

We developed a hybrid clustering algorithm that combines k-means clustering, the leader algorithm, and statistical process control. Our results indicate that the quality of the clusters produced by our algorithm is comparable to, or better than, that of the clusters produced by the expectation maximization algorithm, using sum squared error as an evaluation metric. We also compared the outlier set discovered by our algorithm with the outliers discovered using one nearest neighbor. While our clustering algorithm produced a number of significant false positives and false negatives, most of the outliers detected by our hybrid algorithm (with proper parameter settings) were in fact outliers. We believe that our approach has promise for clustering and outlier detection on streaming data in the WIPER Detection and Alert System [10].

5.5 Quantitative Social Group Dynamics on a Large Scale

Interactions between individuals result in complex community structures, capturing highly connected circles of friends, families, or professional cliques in social networks. Although most empirical studies have focused on snapshots of these communities, because of frequent changes in the activity and communication patterns of individuals the associated social and communication networks are subject to constant evolution. Our knowledge of the mechanisms governing the underlying community
dynamics is limited, but is essential for exploiting the real time streaming cell phone data for emergency management. We have developed a new algorithm based on a clique percolation technique that allows us, for the first time, to investigate in detail the time dependence of overlapping communities on a large scale and to uncover basic relationships among the statistical features of community evolution. This investigation focused on the networks formed by the calls between mobile phone users, observing that these communities are subject to a number of elementary evolutionary steps ranging from community formation to breakup and merging, representing new dimensions in their quantitative interpretation. We found that large groups persist longer if they are capable of dynamically altering their membership, suggesting that an ability to change the composition results in better adaptability and a longer lifetime for social groups. Remarkably, the behavior of small groups displays the opposite: the condition for stability is that their composition remains unchanged [12].
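Returning to the anomaly-detection theme of Sect. 5.2, a much simpler hour-of-day Poisson baseline already illustrates how gross deviations in a call-volume stream can be flagged; the MMPP framework of [11] is the actual method used, and the counts and threshold below are synthetic assumptions.

from math import exp, factorial

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam), computed directly."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# Historical mean calls per hour-of-day for one tower (synthetic).
baseline = [20, 15, 12, 10, 10, 15, 40, 80, 120, 110, 100, 95,
            90, 95, 100, 110, 130, 150, 140, 120, 90, 60, 40, 25]

def anomaly(hour, observed, alpha=1e-4):
    """Flag the hour if the observed count is extremely unlikely under the baseline."""
    return poisson_tail(observed, baseline[hour]) < alpha

print(anomaly(3, 11), anomaly(3, 60))   # normal night-time count vs. a spike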

Fig. 4. A 3D view of activity in mobile phone cells. Each polygon represents the spatial area serviced by one tower. Cell color and height are proportional to the number of active cell phone users in that cell for a unit of time.
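The Voronoi construction behind Fig. 4 can be reproduced with standard tools; the tower coordinates and phone positions below are synthetic.

import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(2)
towers = rng.uniform(0, 10, size=(12, 2))        # synthetic tower locations
phones = rng.uniform(0, 10, size=(5000, 2))      # synthetic active-phone positions

vor = Voronoi(towers)       # vor.vertices / vor.regions give the cell polygons for plotting
cell_of_phone = cKDTree(towers).query(phones)[1]     # nearest tower = Voronoi cell
activity = np.bincount(cell_of_phone, minlength=len(towers))

# activity[i] is the count that drives the height/colour of cell i in a 3D view.
print(activity)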

6 Summary

WIPER is designed for real-time monitoring of normal social and geographical communication and activity patterns of millions of cell phone users, recognizing unusual human agglomerations, potential emergencies and traffic jams. WIPER uses streams
of high-resolution data in the physical vicinity of a communication or traffic anomaly, and dynamically injects them into agent-based simulation systems to classify and predict the unfolding of the emergency in real time. The agent-based simulation systems dynamically steer local data collection in the vicinity of the anomaly. Distributed data collection, monitoring, analysis, simulation and decision support modules are integrated to generate traffic forecasts and emergency alerts for engineering, public safety and emergency response personnel for improved situational awareness.

References [1] NSF, "DDDAS Workshop Report," http://www.dddas.org/nsf-workshop2006/-wkshp_ report.pdf, 2006. [2] J. Harrald and T. Jefferson, "Shared Situational Awareness in Emergency Management Mitigation and Response," in Proceedings of the 40th Annual Hawaii International Conference on Systems Sciences: Computer Society Press, 2007. [3] Naval Aviation Schools Command, "Situational Awareness," http://wwwnt.cnet.navy. mil/crm/crm/stand_mat/seven_skills/SA.asp, 2007. [4] R. B. Dilmanghani, B. S. Manoj, and R. R. Rao, "Emergency Communication Challenges and Privacy," in Proceedings of the 3rd International ISCRAM Conference, B. Van de Walle and M. Turoff, Eds. Newark, NJ, 2006. [5] G. Madey, G. Szábo, and A.-L. Barabási, "WIPER: The integrated wireless phone based emergency response system," Proceedings of the International Conference on Computational Science, Lecture Notes in Computer Science, vol. 3993, pp. 417-424, 2006. [6] T. Schoenharl, R. Bravo, and G. Madey, "WIPER: Leveraging the Cell Phone Network for Emergency Response," International Journal of Intelligent Control and Systems, (forthcoming) 2007. [7] NSF, "Workshop on Dynamic Data Driven Application Systems," www.cise.nsf.gov/ dddas, 2000. [8] F. Darema, "Dynamic Data Driven Application Systems: A New Paradigm for Application Simulations and Measurements," in ICCS'04, Krakow, Poland, 2004. [9] C. C. Douglas, "DDDAS: Virtual Proceedings," http://www.dddas.org/virtual_proceeings. html, 2006. [10] A. Pawling, N. V. Chawla, and G. Madey, "Anomaly Detection in a Mobile Communication Network," Proceedings of the NAACSOS, 2006. [11] Y. Ping, T. Schoenharl, A. Pawling, and G. Madey, "Anomaly detection in the WIPER system using a Markov modulated Poisson distribution," Working Paper, Notre Dame, IN: Computer Science & Engineering, University of Notre Dame, 2007. [12] G. Palla, A.-L. Barabási, and T. Viscsek, "Quantitative social group dynamics on a large scale," Nature (forthcoming), 2007. [13] J.-P. Onnela, J. Saramäki, J. Hyvönen, G. Szabó, D. Lazer, K. Kaski, J. Kertész, and A.L. Barabási, "Structure and tie strengths in mobile communication networks," PNAS (forthcoming), 2007. [14] T. Schoenharl, G. Madey, G. Szabó, and A.-L. Barabási, "WIPER: A Multi-Agent System for Emergency Response," in Proceedings of the 3rd International ISCRAM Conference, B. Van de Walle and M. Turoff, Eds. Newark, NJ, 2006.

AIMSS: An Architecture for Data Driven Simulations in the Social Sciences Catriona Kennedy1, Georgios Theodoropoulos1, Volker Sorge1 , Edward Ferrari2 , Peter Lee2 , and Chris Skelcher2 1

School of Computer Science, University of Birmingham, UK 2 School of Public Policy, University of Birmingham, UK C.M.Kennedy,[email protected]

Abstract. This paper presents a prototype implementation of an intelligent assistance architecture for data-driven simulation specialising in qualitative data in the social sciences. The assistant architecture semi-automates an iterative sequence in which an initial simulation is interpreted and compared with real-world observations. The simulation is then adapted so that it more closely fits the observations, while at the same time the data collection may be adjusted to reduce uncertainty. For our prototype, we have developed a simplified agent-based simulation as part of a social science case study involving decisions about housing. Real-world data on the behaviour of actual households is also available. The automation of the data-driven modelling process requires content interpretation of both the simulation and the corresponding real-world data. The paper discusses the use of Association Rule Mining to produce general logical statements about the simulation and data content and the applicability of logical consistency checking to detect observations that refute the simulation predictions. Keywords: Architecture,Data Driven Simulations, Social Sciences.

1 Introduction: Intelligent Assistance for Model Development

In earlier work [1] we proposed a conceptual architecture for the intelligent management of a data driven simulation system. In that architecture, a software "assistant" agent should compare simulation predictions with data content and adapt the simulation as necessary. Similarly, it should adjust the data collection depending on simulation predictions. In this paper, we present a proof-of-concept prototype that is being developed as part of the AIMSS project (Adaptive Intelligent Model-building for the Social Sciences, http://www.cs.bham.ac.uk/research/projects/aimss/). This is an exploratory implementation of the conceptual architecture. A key issue the AIMSS project is trying to address is "evidence based model development", which can be understood as an iterative process involving the following stages:

1. Formulate the initial model and run the simulation.
2. Once the simulation has stabilised, inspect it visually and determine whether it makes interesting predictions which need to be tested.
3. Collect the relevant data and analyse it.
4. Determine whether the simulation predictions are supported by the data.
5. If the data does not support the predictions, determine whether the model should be revised. Experiment with variations of the original simulation and return to Step 2.

The goal of the AIMSS project is to investigate the role of DDDAS in the automation of this process for the social sciences. The project focuses on qualitative data and agent-based models.

Fig. 1. Data driven adaptation of a simulation: real world data and simulation data are each (1) collected, (2) summarised as high level descriptions, and (3) checked against one another for inconsistency; an inconsistency triggers adaptation of the simulation

A schematic diagram of the AIMSS concept of data-driven adaptation is shown in Figure 1. We can think of the simulation as running in parallel with events in the real world, although in a social science scenario, there are two important differences: 1. The simulation does not physically run in parallel with the real world. Instead, the real world data is usually historical. However, the data input could be reconstructed to behave like a stream of data being collected in parallel). 2. The simulation is abstract and does not correspond to a particular observed system. This means that the variable values read from the data cannot be directly absorbed into the simulation as might be typical for a physical model. Instead, the simulation represents a sequence of events in a typical observed system. The data may be collected from multiple real world instances of the general class of systems represented by the simulation. (For example, the simulation could be about a typical supermarket, while the data is collected from multiple real supermarkets). The figure can be divided into two processes: the first is a process of interpreting both the simulation predictions and the data content and determining whether they are consistent (arrows 1, 2 and 3 in the figure). The second involves adapting the simulation in
the event of an inconsistency. In the current version of the prototype we have focused on the first process. This requires the following automated capabilities: – Interpretation of simulation states (at regular intervals or on demand): due to the abstract and qualitative nature of the simulation, this is not just about reading the current variable values, but about generating a high level description summarising patterns or trends. – Interpretation of real world data: the same methods are required as for the simulation interpretation, except that the data is often more detailed and is usually noisy. Therefore pre-processing is required, which often involves the integration of data from multiple sources and the generation of higher level datasets that correspond to the simulation events. – Consistency checking to determine whether the simulation states mostly agree with descriptions of data content. – Re-direction and focusing of data collection in response to evaluation of simulation states or uncertainty in the data comparison. It is expected that the simulation and data interpretation will be complemented by human visualisation and similarly the consistency/compatibility checking may be overriden by the “common sense” judgements of a user. Visualisation and compact summarisation of the data are important technologies here.

2 Social Science Case Study

As an example case study, we are modelling agents in a housing scenario, focusing on the circumstances and needs of those moving to the social rented sector, and emphasising qualitative measures such as an agent's perception of whether its needs are met. We have implemented the agent-based simulation using RePast (http://repast.sourceforge.net/). The environment for the agents is an abstract "housing space" that may be divided into subspaces. One example scenario is where the space is divided into 4 "regions" (R1-4): R1: expensive, small city centre apartments; R2: inexpensive, cramped city tower blocks in a high crime area; R3: a modest suburb; R4: a wealthy suburb (large expensive houses with large gardens). At initialisation, homes are allocated randomly to regions, with the largest number in the inner city and city centre. Households are allocated randomly to regions with varying initial densities. A household is represented by a single agent, even if it contains more than one member. Precise densities and other attributes of each region (such as the crime level) can be specified as parameters. The simulation is a sequence of steps in which agents decide whether they want to move based on a prioritised sequence of rules. These rules are simplified assumptions about decisions to move. Each rule has the form: if (condition i not satisfied) then look for homes satisfying conditions 1, ..., i, where i is the position in a priority list indexed from 1 to n. For example, if condition 1 is "affordability", this is the first condition to be checked. If the current
home is not affordable, the agent must move immediately and will only consider affordability when selecting other homes (as it is under pressure to move and has limited choice). The agent will only consider conditions further down the list if the conditions earlier in the list are already satisfied by its current home. For example, if the agent is relatively wealthy and its current housing is good, it may consider the pollution level of the neighbourhood as being too high. Before moving into a new home, it will take into account the pollution level of the new area, along with all other conditions that were satisfied in the previous home (i.e. it must be affordable etc.). We have used a default scenario where the conditions are prioritised in the following order: "affordability", "crime level", "living space", "condition of home", "services in neighbourhood" and "pollution level". Those agents that find available homes move into them, but only a limited number are vacant (depending on selected parameters). An agent becomes "unhappy" if it cannot move when it wants to (e.g. because its income is too low). Clearly, the above scenario is extremely simplified: the decision rules do not depend on the actions of neighbours. Since this is a proof-of-concept about intelligent management of a simulation, the actual simulation model itself is a minor part of the prototype. Following the incremental prototyping methodology, we expect that this can be gradually scaled up by successively adding more realistic simulations. More details on the simulation are given in [2].

2.1 Data Sources

For the feasibility study, we are using a database of moves into the social rented sector for the whole of the UK for one year, known as the CORE (Continuous Recording) dataset. Each CORE record includes fields such as household details (age, sex, economic status of each person, total income), new tenancy (type of property, number of rooms, location), previous location, and stated reason for move (affordability, overcrowding etc.). The simulation is a sequence of moves from one house to another for a typical housing scenario over a period of time measured in "cycles". The CORE data contains actual moves that were recorded in a particular area (England).
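A minimal sketch of the prioritised decision rules of Sect. 2 is given below; the attribute names and threshold values are illustrative and are not the prototype's actual XML-specified rules.

# Conditions in priority order; each maps a (household, home) pair to True if satisfied.
CONDITIONS = [
    ("affordability",  lambda hh, home: home["cost"] <= hh["income"] * 0.4),
    ("crime level",    lambda hh, home: home["crime"] <= hh["crime_tolerance"]),
    ("living space",   lambda hh, home: home["rooms"] >= hh["size"]),
]

def wants_to_move(household, current_home):
    """Return the index i of the first unsatisfied condition, or None."""
    for i, (_, cond) in enumerate(CONDITIONS):
        if not cond(household, current_home):
            return i
    return None

def acceptable(household, vacant_home, i):
    """A candidate home must satisfy conditions 1..i (the failed one and all above it)."""
    return all(cond(household, vacant_home) for _, cond in CONDITIONS[: i + 1])

hh   = {"income": 1500, "crime_tolerance": 0.3, "size": 3}
home = {"cost": 500, "crime": 0.6, "rooms": 2}            # fails on crime level
i = wants_to_move(hh, home)
print(CONDITIONS[i][0], acceptable(hh, {"cost": 350, "crime": 0.2, "rooms": 2}, i))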

3 Interpretation and Consistency Checking

The first process of Figure 1 is the generation of datasets from both the simulation and the real world. To define the structure of the data, an ontology is required to specify the entities and attributes in the simulation and to define the state changes. Two components are required to define a simulation:

1. Static entities and relations of the model: for example, households and homes exist, a household can move from one region to another, and a household has a set of needs that must be satisfied.
2. Dynamic behaviour: the decision rules for the agents as well as probabilistic rules for dynamic changes in the environment and in household status (ageing, having children, changes in income etc.).

The way in which these entities are initialised should also be stated as part of the model, as this requires domain knowledge (e.g. initial densities of population and houses); for more detailed models this becomes increasingly non-trivial (see e.g. [3]).


In the AIMSS prototype, both these components are specified in XML. Later we will consider the use of OWL (the Web Ontology Language, http://www.w3.org/2004/OWL/). The XML specification also includes the agent rules. These specifications are machine-readable and may potentially be modified autonomously. We are building on existing work in this area [4]. The entities and attributes are used to define the structure of the data to be sampled from the simulation as well as the structure of a high level dataset to be derived from the pre-processing of the raw data. At the end of this process we have two datasets: one records a sequence of simulated house moves, the other contains a sequence of actual moves.

3.1 Data Mining: Recognising General Patterns

The second stage in Figure 1 is the generation of high level descriptions. These are general statements about the developments in the simulation and in the real world. For this purpose, we are investigating data mining tools. We have done some initial experimentation with Association Rule Mining using the Apriori algorithm [5], which is available in the WEKA machine learning package [6]. Association rules are a set of "if ... then" statements showing frequently occurring associations between combinations of "attribute = value" pairs. This algorithm is suited to large databases containing qualitative data, which is often produced in social science research. Furthermore, it is "unsupervised" in the sense that predefined classes are not given. This allows the discovery of unexpected relationships. An association rule produced by Apriori has the following form:

if (a1 and a2 and ... and an) s1 then (c1 and c2 and ... and cm) s2 conf(c)

where a1, ..., an are antecedents and c1, ..., cm are consequents of the rule. Both antecedents and consequents have the form "attribute = value". s1 and s2 are known as the support values and c is the confidence. The support value s1 is the number of occurrences (records) in the dataset containing all the antecedents on the left side; s2 is the number of occurrences of both the right and left sides together. Only those collections of items with a specified minimum support are considered as candidates for the construction of association rules. The confidence is s2/s1; it is effectively the accuracy of the rule in predicting the consequents, given the antecedents. An example minimum confidence may be 0.9. The higher the support and confidence of a rule, the more it represents a regular pattern in the dataset. If these measures are relatively low, then any inconsistency would be less "strong" than it would be for rules with high confidence and high support. The values of attributes are mutually exclusive. They are either strings or nominal labels for discrete intervals in the case of numeric data. The following is an example rule mined from the simulation data, using the agent rules above and environmental parameters guided by domain specialists:

S1: if (incomeLevel=1 and moveReason=affordability) 283 then newHomeCost=1 283 conf(1)


This specifies that if the income level is in the lowest bracket and the reason for moving was affordability then the rent to be paid for the new home is in the lowest bracket. The following is an example from the CORE data: D1: if (moveReason=affordability and incomeLevel=1) 102 then newHomeCost=2 98 conf(0.96)

This has a similar form to S1 above, except that the new home cost is in the second lowest bracket instead of the lowest.

3.2 Consistency-Checking

Assuming that the CORE data is "typical" if sampled for a minimum time period (e.g. a year), the simulation can also be sampled for a minimum number of cycles beginning after a stabilisation period. The simulation-generated rule above is an example prediction. To test it, we can apply consistency checking to see if there is a rule discovered from the data that contradicts it. This would indicate that the available data does not support the current model. Some existing work on post-processing of association rules includes contradiction checking. For example, [7] uses an "unexpectedness" definition of a rule, given previous beliefs. These methods may be applied to an AIMSS-type architecture, where the "beliefs" are the predictions of a simulation. Efficient algorithms for general consistency checking are available, e.g. [8]. We are currently investigating the application of such algorithms to our work and have so far detected simple inconsistencies of the type between S1 and D1 above.
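The mining-and-comparison step can be sketched with an off-the-shelf Apriori implementation; here the Python package mlxtend stands in for WEKA, the two tiny datasets are synthetic stand-ins shaped after the S1/D1 examples, and the contradiction test is a simplified version of the consistency checking discussed above.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

def mine(records, min_support=0.1, min_conf=0.9):
    """Mine association rules from records given as lists of attribute=value strings."""
    te = TransactionEncoder()
    df = pd.DataFrame(te.fit(records).transform(records), columns=te.columns_)
    freq = apriori(df, min_support=min_support, use_colnames=True)
    return association_rules(freq, metric="confidence", min_threshold=min_conf)

# Tiny synthetic stand-ins for the simulation and CORE datasets.
sim  = [["incomeLevel=1", "moveReason=affordability", "newHomeCost=1"]] * 20
core = [["incomeLevel=1", "moveReason=affordability", "newHomeCost=2"]] * 20

def contradicts(r1, r2):
    """Same antecedents, same attribute in the consequent, different value."""
    attr = lambda items: {i.split("=")[0] for i in items}
    return (r1["antecedents"] == r2["antecedents"]
            and attr(r1["consequents"]) == attr(r2["consequents"])
            and r1["consequents"] != r2["consequents"])

sim_rules, core_rules = mine(sim), mine(core)
clashes = [(s, c) for _, s in sim_rules.iterrows() for _, c in core_rules.iterrows()
           if contradicts(s, c)]
print(len(clashes))

In this simplified test, rules are compared only when their antecedents match exactly, which mirrors the S1/D1 case; more general entailment between rule sets would require the kind of consistency-checking algorithms cited above.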

4 Towards Dynamic Reconfiguration and Adaptation

Work is ongoing to develop mechanisms to dynamically adjust the data collection and the simulation. Data mining often has to be fine-tuned so that the analysis is focused on the most relevant attributes. The rules generated from the simulation should contain useful predictions to be tested, and the rules generated from the data have to make statements about the same entities mentioned in the prediction. Data mining parameters may be adjusted, e.g. by selecting attributes associated with predicted negative or positive outcomes. The consistency checking may still be inconclusive because there is insufficient data to support or refute the prediction. In this case the ontology should contain pointers to additional data sources, and these may be suggested to the user before data access is attempted. The dynamic adjustment of data collection from the simulation is also important so that focusing on particular events is possible (as is the case for the real world). Currently the data generated from the simulation is limited and only includes house moves that have actually taken place. This can be extended so that data can be sampled from the simulation which represents different viewpoints (e.g. it may be a series of spatial snapshots, or it may focus on the dynamic changes in the environment instead of actions of agents).


4.1 Adaptation and Model Revision

The ontology and behaviour model may be adapted, since they are represented in a machine-readable and modifiable form. Possible forms of adaptation include the following:

– Modify the initial values of parameters (such as the initial density of homes in a particular kind of region), or the probabilities used to determine the initial values or to determine when and how they should change.
– Add new attributes or extend the range of values for existing attributes as a result of machine learning applied to the raw data.
– Modify agent behaviour rules or add new ones.
– Modify the order of execution of behaviour rules.

Note that behaviour rules are intended to give a causal explanation, while association rules merely show correlations. Furthermore, association rules may represent complex emergent properties of simple behaviour rules. Populations of strings of behaviour rules may be subjected to an evolutionary algorithm (such as genetic algorithms [9]) to evolve a simulation that is most consistent with the reality in terms of behaviour. Behaviour models that are most "fit" can be regarded as good explanations of the observed data. However, domain experts would have to interact with the system to filter out unlikely behaviours that still fit the available data.

4.2 Limitations of the Current Approach

One limitation of the current prototype is that the pre-processing of the raw data is too much determined by artificial boundaries. For example, Association Rule Mining requires that numeric values are first divided into discrete intervals (e.g. "high", "medium", "low" for income and house prices). The problem of artificial divisions can be addressed by the use of clustering [10] to generate more natural classes, which can then be used as discrete attribute values for an Association Rule miner (a sketch of this step is given at the end of this section). Conceptual Clustering [11] addresses the need for clusters to relate to existing concepts. Instead of just relying on one method, a combination of different pattern recognition and machine learning methods should be applied to the different datasets.

Another limitation of the approach we have taken is that the model-building process is determined by a single interpretation of the data (i.e. one ontology). In future work we plan to capture multiple ways of describing the events to be modelled by involving representatives of different social groups (stakeholders) in the initial model-building process. Multiple ontologies can lead to multiple ways of generating data from a simulation (or possibly even multiple simulations). Analysis of simulation predictions and real world observations is then not dependent on a single interpretation. Therefore the fault-tolerance of the system can be enhanced.
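As an illustration of replacing fixed interval boundaries with data-driven ones, the following is a minimal sketch (not the AIMSS implementation; the income values, the choice of three clusters and the class names are invented) that clusters a numeric attribute into groups whose labels can then serve as nominal values for the rule miner:

    # Discretise a numeric attribute by simple 1D k-means, then label values by cluster rank.
    def kmeans_1d(values, k=3, iters=50):
        vals = sorted(values)
        centres = [vals[int(i * (len(vals) - 1) / (k - 1))] for i in range(k)]  # spread initial centres
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for v in values:
                groups[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
            centres = [sum(g) / len(g) if g else centres[i] for i, g in enumerate(groups)]
        return centres

    def label(value, centres, names=("low", "medium", "high")):
        order = sorted(range(len(centres)), key=lambda i: centres[i])
        j = min(range(len(centres)), key=lambda i: abs(value - centres[i]))
        return names[order.index(j)]

    incomes = [9000, 11000, 12500, 24000, 26000, 27000, 52000, 58000, 61000]   # toy data
    centres = kmeans_1d(incomes)
    print([label(x, centres) for x in incomes])   # low/medium/high class per record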

5 Conclusion

As part of the AIMSS project we have developed a simple prototype demonstrating some of the features required for the assistant agent architecture presented in an earlier


study. Although this prototype is far too simple to be used operationally to generate real-world models, it serves as a proof-of-concept and can be used as a research tool by social scientists to help with exploratory model building and testing. Future work will involve the development of more realistic simulations, as well as the use of a wider range of data analysis and machine learning tools.

Acknowledgements

This research is supported by the Economic and Social Research Council as an e-Social Science feasibility study.

References

1. Kennedy, C., Theodoropoulos, G.: Intelligent Management of Data Driven Simulations to Support Model Building in the Social Sciences. In: Workshop on Dynamic Data-Driven Applications Simulation at ICCS 2006, LNCS 3993, Reading, UK, Springer-Verlag (May 2006) 562–569
2. Kennedy, C., Theodoropoulos, G., Ferrari, E., Lee, P., Skelcher, C.: Towards an Automated Approach to Dynamic Interpretation of Simulations. In: Proceedings of the Asia Modelling Symposium 2007, Phuket, Thailand (March 2007)
3. Birkin, M., Turner, A., Wu, B.: A Synthetic Demographic Model of the UK Population: Methods, Progress and Problems. In: Second International Conference on e-Social Science, Manchester, UK (June 2006)
4. Brogan, D., Reynolds, P., Bartholet, R., Carnahan, J., Loitiere, Y.: Semi-Automated Simulation Transformation for DDDAS. In: Workshop on Dynamic Data Driven Application Systems at the International Conference on Computational Science (ICCS 2005), LNCS 3515, Atlanta, USA, Springer-Verlag (May 2005) 721–728
5. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules in Large Databases. In: Proceedings of the International Conference on Very Large Databases, Santiago, Chile. Morgan Kaufmann, Los Altos, CA (1994) 478–499
6. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques. Elsevier, San Francisco, California (2005)
7. Padmanabhan, B., Tuzhilin, A.: A Belief-Driven Method for Discovering Unexpected Patterns. In: Knowledge Discovery and Data Mining (1998) 94–100
8. Moskewicz, M., Madigan, C., Zhao, Y., Zhang, L., Malik, S.: Chaff: Engineering an Efficient SAT Solver. In: Design Automation Conference (DAC 2001), Las Vegas (June 2001)
9. Mitchell, M.: An Introduction to Genetic Algorithms. MIT Press (1998)
10. Jain, A.K., Murty, M.N., Flynn, P.J.: Data Clustering: A Review. ACM Computing Surveys 31(3) (September 1999)
11. Michalski, R.S., Stepp, R.E.: Learning from Observation: Conceptual Clustering. In: Michalski, R.S., Carbonell, J.G., Mitchell, T.M., eds.: Machine Learning: An Artificial Intelligence Approach. Morgan Kaufmann/Tioga, Palo Alto, CA (1983) 331–363

Bio-terror Preparedness Exercise in a Mixed Reality Environment Alok Chaturvedi, Chih-Hui Hsieh, Tejas Bhatt, and Adam Santone Purdue Homeland Security Institute, Krannert School of Management, 403 West State Street, Purdue University, West Lafayette, IN 47907-2014 {alok, hsiehc, tejas, santone}@purdue.edu

Abstract. The paper presents a dynamic data-driven mixed reality environment to complement a full-scale bio-terror preparedness exercise. The environment consists of a simulation of the virtual geographic locations involved in the exercise scenario, along with an artificially intelligent agent-based population. The crisis scenario, such as the epidemiology of a disease or the plume of a chemical spill or radiological explosion, is then simulated in the virtual environment. The public health impact, the economic impact and the public approval rating impact are then calculated based on the sequence of events defined in the scenario and the actions and decisions made during the full-scale exercise. The decisions made in the live exercise influence the outcome of the simulation, and the outcomes of the simulation influence the decisions being made during the exercise. The mixed reality environment provides the long-term and large-scale impact of the decisions made during the full-scale exercise.

1 Introduction

The Purdue Homeland Security Institute (PHSI) created a Dynamic Data-Driven Mixed Reality Environment to support a full-scale bio-terror preparedness exercise. In a mixed reality environment certain aspects of the scenario are conducted in the live exercise, while others are simulated. Actions and outcomes in the live exercise influence the simulated population, and the actions and outcomes of the simulation affect the lessons learned. The simulation modeled the public health aspect of the virtual population, as well as the economy of the virtual geographies. The artificial population would also voice a public opinion, giving a measure of support for the decisions and actions the government is taking on their behalf. The simulation provided the capability to analyze the impact of the crisis event as well as the government response. With such powerful capabilities, there are numerous advantages to using the simulation to augment the live exercise. The simulation allows us to scale the scenario to a much larger geographical area than is possible with just a live exercise, thereby allowing key decision makers to keep the bigger picture in mind.

This research was partially supported by the National Science Foundation’s DDDAS program grant # CNS-0325846 and the Indiana State 21st Century Research and Technology award #1110030618.



The simulation can execute faster than real time, allowing the participants to analyze the long-term impacts of their actions in a matter of minutes. The simulation also provides the ability to move forward and backward in virtual time, to analyze possible future implications of current actions, or to go back and retry the response to achieve better results. In the future we hope to allow the participants to have greater interaction with the simulation. The participants would receive continuously updated statistics from the simulation and the live exercise. This will allow them to make more strategic decisions on the scenario. With hands-on simulation training provided, or technical support staff taking actions on behalf of the participants, more time can be spent analyzing the results of the simulation than dealing with the simulation itself. The simulation is intended to be used as a tool for discussion of critical issues and problems in a response and recovery scenario. With the live exercise and the simulation connected in real time, the accuracy of the simulation will greatly improve, thereby providing more meaningful information to the key players.

2 Computational Modeling

The computational modeling is based on the Synthetic Environments for Analysis and Simulation (SEAS) platform. SEAS provides a framework that is unbiased toward any one specific scenario, model, or system and can be used to represent fundamental human behavior theories without the restrictions on what can be modeled that are common in modern simulation efforts. The enabling technology leverages recent advances in agent-based distributed computing to decouple control as well as data flow. SEAS is built from a basis of millions of agents operating within a synthetic environment. Agents emulate the attributes and interactions of individuals, organizations, institutions, infrastructure, and geographical decompositions. Agents join together to form networks from which evolve the various cultures of the global population. Intricate relationships among political, military, economic, social, information and infrastructure (PMESII) factors emerge across diverse granularities. Statistics calculated from the simulation are then used to provide measurable evaluations of strategies in support of decision making.

The fundamental agent categories in SEAS are the individuals, organizations, institutions, and infrastructure (IOIIG). The population agents of these fundamental types will form higher order constructs in a fractal-like manner, meaning sufficient detail exists at multiple levels of focus, from world constructs to individuals. Higher order constructs include political systems (type of government, political parties/factions), militaries (soldiers, institutions, branches of service), economic systems (formal banking networks and black-market structures), social systems (tribes, religious groups, neighborhoods) and information systems (print, broadcast, internet).

Agents representing individuals are used to model the populace in the synthetic environment. Individual agents are categorized into citizen and leader agents. An individual's well-being is based on a model consisting of eight fundamental needs: basic, political, financial, security,


religious, educational, health, and freedom of movement. The desire and perceived level of each of the well-being categories are populated taking into account the socio-economic class of the individual the agent represents. Citizen agents are constructed as a proportional representation of the societal makeup of a real nation. A citizen agent consists of a set of fundamental constructs: traits, well-being, sensors, goals, and actions. The traits of citizen agents, such as race, ethnicity, income, education, religion, gender, and nationalism, are configured according to statistics gathered from real-world studies. Dynamic traits, such as religious and political orientations, emotional arousal, location, health, and well-being, result during simulation according to models that operate on the citizen agents and the interactions they have with other agents. The traits and well-being determine the goals of a citizen agent. Each citizen agent "senses" its environment, taking into account messages from leaders the citizen has built a relationship with, media the citizen subscribes to, and other members in the citizen's social network. Each citizen agent's state and goals can change as a result of interactions the citizen has with its environment. A citizen agent can react to its environment by autonomously choosing from its repertoire of actions. Additionally, a citizen agent's set of possible actions can change during the course of the simulation, such as when a citizen agent resorts to violence. Traits, well-being, sensors, and actions together determine the behavior of the citizen agent.

Clusters of citizen and leader agents form organizations. Citizen agents voluntarily join organizations due to affinity in perspective between the citizens and the organization. An organization agent's behavior is based on a foundation consisting of the desires of the organization's leaders and members. Organizational leadership constantly seeks maintenance and growth of the organizational membership by providing tangible and intangible benefits, and citizens subscribe based on a perceived level of benefit that is received from the organization. Additionally, through inter-organization networks, attitudes and resources may be shared among organizations. Through these internal and external interactions, organizations cause significant changes in perception and attitude and become core protagonists of activism in the model. In turn, an organization exercises its power through the control over its resources and its ability to procure and maintain its resource base.

Institution agents are represented as 'governmental entities' such as the army, police, legislature, courts, executive, bureaucracy, and political parties: entities that are able to formulate policies that have legal binding and have more discretionary resources. SEAS models institutions as structures that are products of individual choices or preferences, constrained by the institutional structures (i.e. an interactive process). Institutions are like formal organizations with an additional power to influence the behaviors of members and non-members.

Media agents also play a significant role in providing information to other agents in the form of reports on well-being and attitudes. Media organizations consist of television, radio, newspapers, and magazines. The media make choices of what information to cover, who to cover, what statements to report, what


story elements to emphasize and how to report them. Incidents are framed on well-being components and formalized in a media report. Media are able to set the agenda for domestic policies as well as foreign policy issues. Citizens subscribe to media organizations based on their ideological bent. Media organizations act primarily to frame the issues for their audiences in such a way that they increase their viewership as well as their influence.

Agents interact with the environment and respond, i.e., take action, to exogenous variables that may be specified by human agents or players in the environment, as well as to inputs from other agents. This is implemented with the introduction of inputs and outputs that each agent possesses. Inputs consist of general "environmental sensors" as well as particular "incoming message sensors." The incoming message sensors are singled out because of the importance ascribed to each agent being able to query messages from the environment discriminately. The agent also possesses ports characterized collectively as "external actions" that allow the agent to submit its actions or messages to the environment. Finally, the agent possesses an internal set of rules classified as "internal actions" that produce the agent's "external actions" on the basis of the sensor inputs as well as the traits/attributes and intelligence structure of each agent.
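The citizen-agent loop described above can be sketched as follows. This is a simplification rather than the SEAS implementation: the eight well-being needs follow the text (with "movement" standing in for freedom of movement), while the weights, thresholds and available actions are invented for illustration:

    import random

    NEEDS = ["basic", "political", "financial", "security",
             "religious", "educational", "health", "movement"]

    class CitizenAgent:
        def __init__(self, traits):
            self.traits = traits                                      # e.g. income, education, religion
            self.desired = {n: random.uniform(0.5, 1.0) for n in NEEDS}
            self.perceived = {n: random.uniform(0.3, 0.9) for n in NEEDS}

        def sense(self, messages):
            # messages from leaders, media and the social network shift perceived well-being
            for need, delta in messages:
                self.perceived[need] = min(1.0, max(0.0, self.perceived[need] + delta))

        def act(self):
            # act on the largest gap between desired and perceived well-being
            gaps = {n: self.desired[n] - self.perceived[n] for n in NEEDS}
            worst_need, worst_gap = max(gaps.items(), key=lambda kv: kv[1])
            if worst_gap > 0.5:
                return "protest" if worst_need in ("political", "security") else "relocate"
            return "comply"

    random.seed(0)
    agent = CitizenAgent({"income": "low", "education": "secondary"})
    agent.sense([("security", -0.4), ("health", -0.2)])
    print(agent.act())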

2.1 Virtual Geographies

The simulation will consist of a fictitious community that contains all the relevant features (hospitals, railways, airports, lakes, rivers, schools, business districts) of any Indiana community. This fictitious community can be customized to mimic a real community within Indiana. The virtual geography may be divided into high population density residential areas, low population density residential areas, commercial areas, as well as uninhabitable areas. There can be various levels of granularity for different communities as needed for the scenario (from international, national, state, district, county, city down to city block levels).

2.2 Computational Epidemiology of Synthetic Population

The virtual community will have a virtual population represented by artificial agents. An agent is able to represent the activity of a human through a combination of learned variables and interactions. Research has shown that these agents act as the vertices of a societal network, and that their interactions comprise the edges of the network [Wasserman, 1994]. Like living beings, each agent has different interactions and experiences, and thus acts differently when faced with a situation. And while these evolving differences are essential for a useful simulation, certain predefined traits are also necessary. As an example, though all students in a class may be exposed to a flu virus, certain members will be more susceptible, and case severity will differ among those who contract the illness. For this reason, parameters must be assigned that define the susceptibility of an agent to a given pathogen. The high number of relevant attributes for each agent serves to differentiate each agent from its peers. But as the artificial agents grow in complexity, they must also grow in number, in order to maintain the


characteristics of the society they seek to describe. Once the society has been sufficiently populated, the artificial agents begin to interact with and learn from each other, forming an environment well suited for analysis and interaction by human agents. In addition to these behaviors, each agent is endowed with certain characteristics that help to differentiate the population. These attributes help to model the variability in human response to a situation. As an example, a wealthier individual may be more likely to leave a high-risk area, if only because of the financial independence he or she enjoys. The following is a partial list of characteristics that serve to differentiate one artificial agent from another: Age, Sex, Income, Education, and Health.

The decision-making process for an artificial agent is a simplified version of the decision-making process of rational humans. When faced with a decision, rational humans consider numerous variables. Using a combination of intuition, experience, and logic, one selects the alternative that leads to a certain goal, usually happiness. And while different decisions vary in magnitude, the underlying cognitive model remains relatively constant. As such, while different physical or psychological needs take precedence in different situations, the human decision-making process can be modeled by considering each need in a hierarchical manner. To illustrate, scholarship has shown that, when presented with a threatening environment, the primary focus of a living being shifts to ensuring its own survival. The list that follows partially describes the variables that an artificial agent considers before making a decision: Security, Information Level, Health, Basic Necessities, Mobility and Freedom, Financial Capability, and Global Economy.

In the SEAS environment, as in the real world, reproductive rates and propagation vary according to the type of disease. Similarly, variables such as population density, agent mobility, social structure, and way of life interact to determine the proliferation of the disease. The government officials, or human agents, interact with the system and control propagation via such means as vaccination, treatment, or agent isolation. The options available to the human agents are the same as in real life, and the effectiveness of these interactions is modeled using statistically verified historical information [Longini et al., 2000].
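A minimal sketch of a susceptibility-weighted exposure step of the kind described in this section (a toy stochastic contact model in the spirit of the flu example above, not the statistically calibrated SEAS epidemiology; all parameters are invented):

    import random

    def exposure_step(agents, base_infectivity=0.3):
        """agents: dicts with 'state' in {'S', 'E', 'R'} and 'susceptibility' in [0, 1]."""
        exposed = [a for a in agents if a["state"] == "E"]
        for agent in agents:
            if agent["state"] != "S":
                continue
            for _ in exposed:                               # each contact may expose the agent,
                if random.random() < base_infectivity * agent["susceptibility"]:
                    agent["state"] = "E"                    # weighted by its susceptibility
                    break
        return agents

    random.seed(0)
    population = ([{"state": "E", "susceptibility": 1.0}] +
                  [{"state": "S", "susceptibility": random.uniform(0.2, 1.0)} for _ in range(9)])
    print(sum(a["state"] == "E" for a in exposure_step(population)))   # exposed count after one step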

2.3 Public Opinion Model

While the safety of the artificial agents takes highest precedence, government officials must consider the overall spirit of the population when making decisions. To illustrate, though safety may be maximized by quarantining a city in every instance of potential attack [Kaplan, Craft & Wein, 2002], such restrictive measures may not be tolerated by the population. To enhance the level of learning they can achieve through the simulation, the human agents must consider the impact on public sentiment that each of their decisions may have. As in real life, each artificial agent determines his or her happiness level using a combination of variables: Current health status - must be alive in order to hold an opinion; perceived security; information level; basic necessities; and freedom of mobility.
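A small sketch of the opinion calculation described above, written as a weighted combination of the listed variables (the weights and agent values are invented; the actual SEAS public opinion model is more elaborate):

    def agent_opinion(agent, weights=None):
        """Approval in [0, 1]; agents that are not alive hold no opinion (None)."""
        if not agent["alive"]:
            return None
        weights = weights or {"security": 0.3, "information": 0.2,
                              "necessities": 0.3, "mobility": 0.2}
        return sum(weights[k] * agent[k] for k in weights)

    def public_opinion(agents):
        opinions = [o for o in (agent_opinion(a) for a in agents) if o is not None]
        return sum(opinions) / len(opinions) if opinions else 0.0

    crowd = [{"alive": True, "security": 0.4, "information": 0.7, "necessities": 0.8, "mobility": 0.5},
             {"alive": False, "security": 0.0, "information": 0.0, "necessities": 0.0, "mobility": 0.0}]
    print(round(public_opinion(crowd), 2))   # only living agents contribute (0.6 here)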

2.4 Economic Impact Model

The simulation projects the long term economic impact of the crisis scenario. The economic impact of the crisis scenario as well as the government response is modeled based on the following criteria: Loss from impact on public health, cost of response, loss of productivity, loss of business, and loss from impact on public opinion.

3 Application: Indiana District 3 Full-Scale Exercise 2007

For the Indiana District 3 full-scale exercise in January 2007, the crisis scenario involved the intentional release of aerosolized anthrax at a fictitious district-wide festival. The following section describes the district-wide results of the simulation in more detail.

3.1 Detailed Public Health Statistics for District 3

As shown in Fig. 1, the scenario assumed that 22,186 people were initially exposed to the anthrax. As the epidemiology of anthrax unfolds, 17,693 people who were not exposed to enough anthrax spores move into the recovered health status. The remaining 4,493 people start showing symptoms and falling sick, with a brief period of reduction in symptoms followed by high fever and an 80% chance of shock and death; 892 of these people eventually recover. Note: this graph is for the no-intervention case (assuming no mass prophylaxis was started).

Fig. 1. Disease progression within the population of District 3 over a period of 3 weeks

3.2 Economic Impact for District 3

Even with a strong public health response, the District would have to deal with a tremendous economic impact (close to 400 million dollars in the long term) due to the crisis situation, as seen from Fig. 2.


Fig. 2. The economic loss is approximately 900 million dollars less than the worst case

3.3 Public Opinion Impact for District 3

In the worst-case scenario, the population became aware of the crisis only when people started to die from the anthrax exposure, as seen in Fig. 3. Hence, in the worst case, public opinion drops at a later date than the point at which the government announces plans for mass prophylaxis in the exercise. Even though public opinion dropped sooner in the exercise, it did not go as low as in the worst-case scenario, due to the proactive and efficient government response.

Fig. 3. Public opinion shows 21.47% more approval of the government than in the worst-case scenario, due to the government response in curbing the situation

4 Conclusion

The quick response of the local agencies participating in the exercise resulted in fewer casualties in all counties within District 3. The counties were quick to


determine the shortage of regimens available to them locally, and to request additional regimens from the state. All the counties would have been able to complete the simulated mass prophylaxis of the exposed population within the timeline guided by the CDC (within 48 hours after exposure), based on their survey responses. This created a dramatic difference (saving 2,505 lives) in the public health statistics of District 3 as compared to the worst case. While the worst-case economic loss would have been around 1.3 billion dollars for District 3, the estimated economic loss for the District 3 Exercise was only 392 million dollars, due to the government response and public health actions. An enormous long-term economic loss could have crippled the entire district if the crisis had not been handled properly. This situation was avoided during the District 3 Exercise due to positive and efficient government actions. The initial drop in public opinion was due to the inability of the government to prevent the terror attack from taking place; in the long term, however, the government response and its gain of control over the situation stabilized public opinion. Based on the public opinion results, it would take some time before public opinion would come back to normal levels; it would likely take aggressive media campaigns and public service announcements, by the public information officers as well as the elected officials, to mitigate the general state of panic and fear of such an attack happening again.

References

1. Data from CDC website (http://www.bt.cdc.gov/agent/anthrax/anthrax-hcpfactsheet.asp)
2. Data from FDA website (http://www.fda.gov/CBER/vaccine/anthrax.htm)
3. Reducing Mortality from Anthrax Bioterrorism: Strategies for Stockpiling and Dispensing Medical and Pharmaceutical Supplies. Dena M. Bravata. Table 1
4. Systematic Review: A Century of Inhalational Anthrax Cases from 1900 to 2005. Jon-Erik K. Holty, p. 275
5. Center for Terrorism Risk Management Policy (http://www.rand.org)
6. The Roper Center for Public Opinion Research at the University of Connecticut
7. E.H. Kaplan, D.L. Craft and L.M. Wein: Emergency response to a smallpox attack: The case for mass vaccination. PNAS (2002)
8. I.M. Longini, M. Elizabeth Halloran, A. Nizam, et al.: Estimation of the efficacy of live, attenuated influenza vaccine from a two-year, multi-center vaccine trial: implications for influenza epidemic control. Vaccine 18 (2000) 1902–1909

Dynamic Tracking of Facial Expressions Using Adaptive, Overlapping Subspaces Dimitris Metaxas, Atul Kanaujia, and Zhiguo Li Department of Computer Science, Rutgers University {dnm,kanaujia,zhli}@cs.rutgers.edu

Abstract. We present a Dynamic Data Driven Application System (DDDAS) to track 2D shapes across large pose variations by learning the non-linear shape manifold as overlapping, piecewise linear subspaces. The learned subspaces adaptively adjust to the subject by tracking the shapes independently using the Kanade-Lucas-Tomasi (KLT) point tracker. The novelty of our approach is that the tracking of feature points is used to generate independent training examples for updating the learned shape manifold and the appearance model. We use landmark-based shape analysis to train a Gaussian mixture model over the aligned shapes and learn a Point Distribution Model (PDM) for each of the mixture components. The target 2D shape is searched by first maximizing the mixture probability density for the local feature intensity profiles along the normal, followed by constraining the global shape using the most probable PDM cluster. The feature shapes are robustly tracked across multiple frames by dynamically switching between the PDMs. The tracked 2D facial features are used to deform the 3D face mask. The main advantage of the 3D deformable face models is the reduced dimensionality. The smaller number of degrees of freedom makes the system more robust and enables capturing subtle facial expressions as changes of only a few parameters. We demonstrate the results on tracking facial features and provide several empirical results to validate our approach. Our framework runs close to real time at 25 frames per second.

1 Introduction

Tracking deformable shapes across multiple viewpoints is an active area of research and has many applications in biometrics, facial expression analysis and synthesis for deception, security and human-computer interaction applications. Accurate reconstruction and tracking of 3D objects require well defined delineation of the object boundaries across multiple views. Landmark-based deformable models like Active Shape Models (ASM) [1] have proved effective for object shape interpretation in 2D images and have led to advanced tools for statistical shape analysis. ASM detects features in the image by combining prior shape information with the observed image data. A major limitation of ASM is that it ignores the non-linear geometry of the shape manifold. Aspect changes of 3D objects cause shapes to vary non-linearly on a hyper-spherical manifold.


A generic shape model that would fit any facial expression is difficult to train, due to numerous possible faces and relative feature locations. In this work we present a generic framework to learn the non-linear shape space as overlapping piecewise linear subspaces and then dynamically adapt the shape and appearance model to the face of the subject. We do this by accurately tracking facial features across large head rotations and re-training the model specific to the subject using the unseen shapes generated from KLT tracking. We use Point Distribution Models (PDM) to represent the facial feature shapes and use ASM to detect them in the 2D image. Our generic framework enables large-scale automated training of different shapes from multiple viewpoints. The shape model is composed of the Principal Components that account for most of the variations arising in the data set. Our Dynamic Data Driven framework continuously collects different shapes by tracking feature points independently and adjusts the principal component basis to customize it for the subject.

2 Related Work

A large segment of research in the past decade has focused on incorporating non-linear statistical models for learning the shape manifold. Murase et al. [2] showed that poses from multiple viewpoints, when projected onto eigenspaces, generate a 2D hypersphere manifold. Gong et al. [3] used non-linear projections onto the eigenspace to track and estimate pose from multiple viewpoints. Romdhani et al. [4] proposed an ASM based on Kernel PCA to learn shape variation of the face due to yaw. Several prominent works on facial feature registration and tracking use appearance-based models (AAM) [5,6]. [5] uses multiple independent 2D AAM models to learn correspondences between features of different viewpoints. The most notable work in improving ASM to learn non-linearities in the training data is by Cootes et al. [7], in which large variation in shapes is captured by a parametric Gaussian mixture density learned in the principal subspace. Unlike [5], our framework does not require explicit modeling of head pose angles. Although we use a multivariate Gaussian mixture model to learn initial clusters of the shape distribution, our subspaces are obtained by explicitly overlapping the clusters.

3 Learning Shape Manifold

An Active Shape Model (ASM) is a landmark-based model that tries to learn a statistical distribution over variations in shapes for a given class of objects. Changes in viewpoint cause the object shapes to lie on a hyper-sphere, and they cannot be accurately modeled using linear statistical tools. Face shape variation across multiple aspects is different across human subjects. It is therefore inaccurate to use a static model to track facial features for different subjects. Our approach to dynamically specialize the learned shape manifold to a human subject provides an elegant solution to this problem. However, tracking shapes across multiple aspects requires modeling and synthesis of paths between the source and target shapes lying on a non-linear manifold. In


our framework the non-linear region is approximated as a combination of multiple smaller linear subregions. For the first frame, we search the shape subspace iteratively by searching along the normals of the landmark points and simultaneously constraining it to lie on the shape manifold. The path between the source shape and the target shape is traversed by searching across multiple subspaces that constitute the non-linear shape surface. For the subsequent frames, we track the facial features independent of the prior shape model. The tracked shapes are used to learn Principal Components of the shape and appearance models that capture the variations specific to the human subject's face.

As a pre-requisite for shape analysis, all the 2D planar shapes are aligned to the common co-ordinate system using Generalized Procrustes Analysis [8]. The tangent space approximation T_s projects the shapes on a hyper-plane normal to the mean vector and passing through it. Tangent space is a linear approximation of the general shape space so that the Procrustes distance can be approximated as the Euclidean distance between the planar shapes. The cluster analysis of shape is done in the global tangent space. We assume a generative multivariate Gaussian mixture distribution for both the global shapes and the intensity profile models (IPMs). The conditional density of the shape S_i belonging to an N-class model is

p(S_i \mid \mathrm{Cluster}) = \sum_{j=1}^{N} \gamma_j \,(2\pi)^{-N/2}\, |C_j|^{-1/2} \exp\Big\{ -\tfrac{1}{2}\, \big(S_i - (\mu_j + P_j b_j)\big)^{T} C_j^{-1} \big(S_i - (\mu_j + P_j b_j)\big) \Big\}   (1)

We also assume a diagonal covariance matrix C_j. γ_j are the cluster weights and (μ_j, P_j, b_j) are the mean, eigen matrix and eigen-coefficients respectively for the principal subspace defined for each cluster. The clustering can be achieved by the EM algorithm with variance flooring to ensure sufficient overlap between the clusters. For each of the N clusters we learn a locally linear PDM using PCA, using the eigenvectors to capture significant variance in the cluster (98%). The intensity profiles for the landmark points also exhibit large variation when trained over multiple head poses. The change in face aspect causes the profiles to vary considerably for the feature points that are occluded. The multivariate Gaussian mixture distribution (1) is learned for the local intensity profile models (IPMs) in order to capture variations that cannot be learned using a single PCA model.

Overlapping Between Clusters: It is important that adjacent clusters overlap sufficiently to ensure switching between subspaces during image search and tracking. We can ensure subspace overlap by using boundary points between adjacent clusters to learn the subspaces of both clusters. These points can be obtained as those nearest to a cluster center but not belonging to that cluster.
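A minimal sketch of this clustering step using NumPy and scikit-learn rather than the authors' implementation (the 98% variance threshold follows the text; reg_covar stands in for variance flooring, and the data, cluster count and landmark count are stand-ins):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def build_piecewise_pdm(shapes, n_clusters=3, var_keep=0.98):
        """shapes: (num_samples, 2 * num_landmarks) array of aligned shape vectors."""
        gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag",
                              reg_covar=1e-3)            # reg_covar plays the role of variance flooring
        labels = gmm.fit_predict(shapes)
        pdms = []
        for j in range(n_clusters):
            cluster = shapes[labels == j]
            if len(cluster) < 2:
                continue                                  # skip degenerate clusters in this toy setting
            mu = cluster.mean(axis=0)
            u, s, vt = np.linalg.svd(cluster - mu, full_matrices=False)
            var = (s ** 2) / (len(cluster) - 1)
            k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
            pdms.append({"mean": mu, "basis": vt[:k], "eigvals": var[:k]})   # per-cluster PDM
        return gmm, pdms

    rng = np.random.default_rng(0)
    gmm, pdms = build_piecewise_pdm(rng.normal(size=(120, 136)))   # 120 shapes, 68 landmarks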

4 Image Search in the Clustered Shape Space

Conventional ASM uses an Alternating Optimization (AO) technique to fit the shape by searching for the best matched profile along the normal, followed by constraining the shape to lie within the learned subspace. The initial average


Fig. 1. Iterative search across multiple clusters to fit the face. The frames correspond to iteration 1 (Cluster 1), iter. 3 (Cluster 5), iter. 17 (Cluster 7), iter. 23 (Cluster 6) and the final fit at iter. 33 (Cluster 6) for level 4 of the Gaussian pyramid.

shape is assumed to be in a region near to the target object. We use the robust Viola-Jones face detector to extract a bounding box around the face and use its dimensions to initialize the search shape. The face detector has a 99% detection rate for faces with off-plane and in-plane rotation angles of ±30°. We assign the nearest Cluster_i to the average shape based on the Mahalanobis distance between the average shape and the cluster centers in the global tangent space. The image search is initiated at the topmost level of the pyramid by searching the IPM along normals and maximizing the mixture probability density (1) of the intensity gradient along the profile. The model update step shifts the shape to the current cluster subspace by truncating the eigen-coefficients to lie within the allowable variance as ±2√λ_i. The shape is re-assigned to the nearest cluster based on the Mahalanobis distance, and the shape coefficients are re-computed if the current subspace is different from the previous one. The truncation function used to regularize the shapes usually generates discontinuous shape estimates. We use the truncation approach due to its low computational requirement and faster convergence. The above steps are performed iteratively and converge irrespective of the initial cluster of the average shape.
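A minimal sketch of the cluster assignment and shape-regularisation steps described above (reusing the illustrative pdm dictionaries from the earlier sketch; here the Mahalanobis distance is computed in each cluster's PDM subspace, which is a simplification of the global tangent-space computation in the text):

    import numpy as np

    def nearest_cluster(shape, pdms):
        """Pick the cluster with the smallest Mahalanobis distance in its PDM subspace."""
        def maha(p):
            b = p["basis"] @ (shape - p["mean"])
            return float(np.sum(b ** 2 / (p["eigvals"] + 1e-9)))
        return int(np.argmin([maha(p) for p in pdms]))

    def constrain_to_pdm(shape, pdm):
        """Project onto the cluster PDM and clamp coefficients to +/- 2*sqrt(lambda_i)."""
        b = pdm["basis"] @ (shape - pdm["mean"])          # eigen-coefficients
        limit = 2.0 * np.sqrt(pdm["eigvals"])
        b = np.clip(b, -limit, limit)                     # truncate to the allowable variance
        return pdm["mean"] + pdm["basis"].T @ b           # reconstruct a plausible shape

In the iterative search, these two operations alternate with the profile search along the normals until the shape estimate stops changing.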

5 Dynamic Data Driven Tracking Framework

We track the features independently of the ASM with a Sum of Squared Intensity Difference (SSID) tracker across consecutive frames [9]. The SSID tracker is a method for registering two images and computes the displacement of the feature by minimizing the intensity matching cost, computed over a fixed-size window around the feature. Over a small inter-frame motion, a linear translation model can be accurately assumed. For an intensity surface at image location I(x_i, y_i, t_k), the tracker estimates the displacement vector d = (δx_i, δy_i) from the new image I(x_i + δx, y_i + δy, t_{k+1}) by minimizing the residual error over a window W around (x_i, y_i) [9]:

\int_{W} \big[ I(x_i + \delta x,\, y_i + \delta y,\, t_{k+1}) - g \cdot d - I(x_i, y_i, t_k) \big] \, dW   (2)

The inter-frame image warping model assumes that, for small displacements of the intensity surface of image window W, the horizontal and vertical displacement of the surface at a point (x_i, y_i) is a function of the gradient vector g at that point.
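Under this linear-translation assumption, the displacement for one window can be estimated with a single Lucas-Kanade/KLT-style step; the following didactic NumPy version (not the tracker used in the paper) solves the least-squares problem implied by Eq. (2):

    import numpy as np

    def klt_step(img0, img1, x, y, half=7):
        """Estimate the translation d = (dx, dy) of a (2*half+1)^2 window centred at (x, y)."""
        w0 = img0[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        w1 = img1[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        gy, gx = np.gradient(w0)                        # spatial gradient g
        gt = w1 - w0                                    # temporal intensity difference
        A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
        return np.linalg.solve(A + 1e-6 * np.eye(2), b)   # least-squares displacement (dx, dy)

    ys, xs = np.mgrid[0:64, 0:64]
    img0 = np.sin(xs / 5.0) + np.cos(ys / 7.0)            # smooth synthetic frame
    img1 = np.roll(img0, 1, axis=1)                        # content shifted right by one pixel
    print(klt_step(img0, img1, x=32, y=32))                # estimated displacement, roughly (1, 0)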


Fig. 2. (Best viewed in color.) Tracking the shapes across a right head rotation. (Top) The cluster projections onto 2D space using 2 principal modes (for visualization), bounded by the hyper-ellipsoid subspaces. The right head rotation causes the shape to vary across the clusters. The red circles correspond to frames 1, 49, 68, 76, 114, 262 and 281. The entire tracking path lies within the subspace spanned by the hyper-ellipsoids. (Bottom) The images of the tracking result for the frames shown as red markers in the plot.

The tracking framework generates a number of new shapes not seen during the training for ASM and hence provides independent data for our dynamic data driven application system. Both the appearance (IPMs) and the shape models are composed of principal basis vectors that are dynamically updated as we obtain new shapes and IPMs for the landmark points. For the shape X_{i+1} at time step (i+1), the covariance matrix C_i is updated as

C_{i+1} = \frac{(N+i) - K}{N+i}\, C_i + \frac{K}{N+i}\, X_{i+1}^{T} X_{i+1}   (3)

where N is the number of training examples and i is the index of the current tracked frame. The updated covariance matrix C_{i+1} is diagonalized using the power method to obtain a new set of basis vectors. The subspace corresponding to these basis vectors encapsulates the unseen shape. The sequence of independent shapes and IPMs for the landmarks is used to update the current and neighboring subspaces, and the magnitude of the updates can be controlled by the predefined learning rate K. The number of PCA basis vectors (eigenvectors) may also vary as a result of updating and specializing the shape and the appearance model. Fig. 3 illustrates the applicability of our adaptive learning methodology to extreme facial expressions of surprise, fear, joy and disgust (not present in the training images). For every frame we align the new shape Y_t to the global average shape X_init and re-assign it to the nearest Cluster_i based on the Mahalanobis distance. Finally, after every alternate frame we ensure that the shape Y_t obtained from tracking is a plausible shape by constraining it to lie on the shape manifold of the current cluster. Fig. 2 shows the path (projection on 2 principal components) of a shape (and


Fig. 3. 2D Tracking for extreme facial expressions

the corresponding cluster) for a tracking sequence when the subject rotates the head from frontal to full right profile view and back. The entire path remains within the plausible shape manifold spanned by the 9 hyper-ellipsoid subspaces.
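A minimal sketch of the incremental update of Eq. (3) and the subsequent re-diagonalisation (NumPy's eigh is used here in place of the power method; the learning rate, dimensions and variance threshold are illustrative):

    import numpy as np

    def update_subspace(C, x_new, N, i, K=1.0, var_keep=0.98):
        """Blend a new mean-centred shape row-vector x_new into covariance C as in Eq. (3)."""
        x = x_new.reshape(1, -1)
        C_new = ((N + i) - K) / (N + i) * C + (K / (N + i)) * (x.T @ x)
        eigvals, eigvecs = np.linalg.eigh(C_new)           # re-diagonalise (instead of the power method)
        order = np.argsort(eigvals)[::-1]
        eigvals = np.clip(eigvals[order], 0.0, None)
        eigvecs = eigvecs[:, order]
        k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_keep)) + 1
        return C_new, eigvecs[:, :k], eigvals[:k]

    C0 = 0.01 * np.eye(136)                                # stand-in covariance of the trained model
    x_unseen = np.random.default_rng(1).normal(size=136)   # stand-in for a newly tracked shape
    C1, basis, lams = update_subspace(C0, x_unseen, N=120, i=1)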

6 Deformable Model Based 3D Face Tracking

Deformable model based 3D face tracking is the process of estimating, over time, the value of the face deformation parameters (also known as the state vector of the system) based on image forces computed from face image sequences. Our objective is to build a dynamically coupled system that can recover both the rigid motion and the deformations of a human face, without the use of manual labels or special equipment. The main advantage of deformable face models is the reduced dimensionality. The smaller number of degrees of freedom makes the system more robust and efficient, and it also makes post-processing tasks, such as facial expression analysis, more convenient based on the recovered parameters. However, the accuracy and reliability of a deformable model tracking application is strongly dependent on accurate tracking of image features, which act as the 2D image force for 3D model reconstruction. Low-level feature tracking algorithms, such as optical flow, often suffer from occlusion, unrealistic assumptions, etc. On the other hand, model-based 2D feature extraction methods, such as the active shape model, have been shown to be less prone to image noise and can deal with occlusions. In this paper, we take advantage of the coupling of the 3D deformable model and the 2D active shape model for accurate 3D face tracking. On the one hand, the 3D deformable model can obtain more reliable 2D image forces from the 2D active shape model. On the other hand, the 2D active shape model benefits from the good initialization provided by the 3D deformable model, which improves the accuracy and speed of the 2D active shape model. The coupled system can handle large rotations and occlusions.

A 3D deformable model is parameterized by a set of parameters q. Changes in q cause geometric deformations of the model. A particular point on the surface is denoted by x(q; u) with u ∈ Ω. The goal of a shape and motion estimation process is to recover the parameters q from face image sequences. To distinguish between shape estimation and motion tracking, the parameters q can be divided into two parts: static parameters q_s, which describe the unchanging features of a particular face, and dynamic parameters q_m, which describe the global (rotation and translation of the head) and local deformations (facial expressions) of an observed face during tracking. The deformations can also be divided into two parts: T_s for shape and T_m for motion (expression), such


Fig. 4. 3D tracking results of deformable mask with large off-plane head rotations

that x(q; u) = T_m(q_m; T_s(q_s; s(u))). The kinematics of the model is \dot{x}(u) = L(q; u)\,\dot{q}, where L = \partial x / \partial q is the model Jacobian. Considering the face images under a perspective camera with focal length f, the point x(u) = (x, y, z)^T projects to the image point x_p(u) = \frac{f}{z}(x, y)^T. The kinematics of the new model is given by:

\dot{x}_p(u) = \frac{\partial x_p}{\partial x}\, \dot{x}(u) = \Big( \frac{\partial x_p}{\partial x} L(q; u) \Big) \dot{q} = L_p(q; u)\, \dot{q}   (4)

where the projection Jacobian matrix is

\frac{\partial x_p}{\partial x} = \begin{pmatrix} f/z & 0 & -f x/z^2 \\ 0 & f/z & -f y/z^2 \end{pmatrix}   (5)

which converts the 2D image forces to 3D forces. Estimation of the model parameters q is based on first-order Lagrangian dynamics [10], \dot{q} = f_q, where the generalized forces f_q are identified by the displacements between the actual projected model points and the identified corresponding 2D image features, which in this paper are the 2D active shape model points. They are computed as:

f_q = \sum_j L_p(u_j)^T f_{image}(u_j)   (6)

Given an adequate model initialization, these forces will align features on the model with image features, thereby determining the object parameters. The dynamic system is solved by integrating over time, using standard differential equation integration techniques:

q(t + 1) = q(t) + \dot{q}(t)\,\Delta t   (7)

Goldenstein et al. showed in [11] that the image forces f_image and generalized forces f_q in these equations can be replaced with affine forms that represent probability distributions, and furthermore that, with sufficiently many image forces, the generalized force converges to a Gaussian distribution. In this paper, we take advantage of this property by integrating the contributions of ASMs with other cues, so as to achieve robust tracking even when ASM methods and standard 3D deformable model tracking methods provide unreliable results by themselves.
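A minimal sketch of one tracking step built from Eqs. (4)-(7), assuming the model Jacobian blocks L_j and the 2D image forces (displacements toward the corresponding ASM points) are supplied by other components; the focal length, dimensions and data below are illustrative only:

    import numpy as np

    def projection_jacobian(x, f):
        """Eq. (5): derivative of the perspective projection at the 3D point x = (X, Y, Z)."""
        X, Y, Z = x
        return np.array([[f / Z, 0.0,   -f * X / Z ** 2],
                         [0.0,   f / Z, -f * Y / Z ** 2]])

    def track_step(q, model_points_3d, L_blocks, image_forces, f=600.0, dt=1.0):
        """One step: f_q = sum_j Lp(u_j)^T f_image(u_j) (Eq. 6), then q += q_dot * dt (Eq. 7)."""
        f_q = np.zeros_like(q)
        for x_j, L_j, f_img in zip(model_points_3d, L_blocks, image_forces):
            Lp = projection_jacobian(x_j, f) @ L_j        # Eq. (4): a 2 x dim(q) matrix
            f_q += Lp.T @ f_img
        q_dot = f_q                                       # first-order Lagrangian dynamics: q_dot = f_q
        return q + q_dot * dt

    dim_q = 10
    q = np.zeros(dim_q)
    pts3d = [np.array([0.0, 0.0, 100.0]), np.array([5.0, -3.0, 110.0])]
    L_blocks = [np.random.default_rng(3).normal(size=(3, dim_q)) for _ in pts3d]
    forces = [np.array([1.0, -0.5]), np.array([0.2, 0.3])]   # ASM point minus projected model point
    print(track_step(q, pts3d, L_blocks, forces))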

7 Conclusion

In this work we have presented a real-time DDDAS framework for detecting and tracking deformable shapes across non-linear variations arising due to aspect changes. Detailed analysis and empirical results have been presented on issues related to modeling non-linear shape manifolds using piecewise linear models. The shape and appearance models update themselves using new shapes obtained from tracking the feature points. The tracked 2D features are used to deform the 3D face mask and to summarize the facial expressions using only a few parameters. This framework has many applications in face-based deception analysis, and we are in the process of performing many tests based on relevant data.

Acknowledgement

This work has been supported in part by the National Science Foundation under the following two grants: NSF-ITR-0428231 and NSF-ITR-0313184.

Patent Pending. The current technology is protected by the patent and trademark office, "System and Method for Tracking Facial Features," Atul Kanaujia and Dimitris Metaxas, Rutgers Docket 07-015, Provisional Patent #60874, 451 filed December 12, 2006. No part of this technology may be reproduced or displayed in any form without the prior written permission of the authors.

References

1. Cootes, T.: An Introduction to Active Shape Models. Oxford University Press (2000)
2. Murase, H., Nayar, S.: Learning and recognition of 3D Objects from appearance. IJCV (1995)
3. Gong, S., Ong, E.J., McKenna, S.: Learning to associate faces across views in vector space of similarities to prototypes. BMVC (1998)
4. Romdhani, S., Gong, S., Psarrou, A.: A Multi-View Nonlinear Active Shape Model Using Kernel PCA. BMVC (1999)
5. Cootes, T., Wheeler, G., Walker, K., Taylor, C.: View-Based Active Appearance Models. BMVC (2001)
6. Edwards, G.J., Taylor, C.J., Cootes, T.F.: Learning to Identify and Track Faces in Image Sequences. BMVC (1997)
7. Cootes, T., Taylor, C.: A mixture model for representing shape variation. BMVC (1997)
8. Goodall, C.: Procrustes methods in the statistical analysis of shape. Journal of the Royal Statistical Society (1991)
9. Tomasi, C., Kanade, T.: Detection and Tracking of Point Features. Technical Report CMU-CS-91-132 (1997)
10. Metaxas, D.: Physics-Based Deformable Models: Applications to Computer Vision, Graphics and Medical Imaging. Kluwer Academic Publishers (1996)
11. Goldenstein, S., Vogler, C., Metaxas, D.: Statistical Cue Integration in DAG Deformable Models. PAMI (2003)

Realization of Dynamically Adaptive Weather Analysis and Forecasting in LEAD: Four Years Down the Road Lavanya Ramakrishnan, Yogesh Simmhan, and Beth Plale School of Informatics, Indiana University, Bloomington, IN 47045, {laramakr, ysimmhan, plale}@cs.indiana.edu

Abstract. Linked Environments for Atmospheric Discovery (LEAD) is a large-scale cyberinfrastructure effort in support of mesoscale meteorology. One of the primary goals of the infrastructure is support for real-time dynamic, adaptive response to severe weather. In this paper we revisit the conception of dynamic adaptivity as it appeared in our 2005 DDDAS workshop paper, and discuss changes since the original conceptualization and lessons learned in working with a complex service oriented architecture in support of data driven science.

Keywords: Weather Analysis and Forecasting.

1 Introduction

Linked Environments for Atmospheric Discovery (LEAD) [2] is a large-scale cyberinfrastructure effort in support of mesoscale meteorology. This is accomplished through middleware that facilitates adaptive utilization of distributed resources, sensors and workflows, driven by an adaptive service-oriented architecture (SOA). As an SOA, LEAD encapsulates both application and middleware functionality into services. These services include both atomic application tasks as well as resource and instrument monitoring agents that drive the workflow. The project is broad, with significant effort expended on important efforts such as education and outreach.

LEAD was conceived in the early 2000s in response to the then state-of-the-art in meteorology forecasting. Forecasts were issued on a static, cyclic schedule, independent of current weather conditions. But important technology and science factors were converging to make it possible to transform weather forecasting by making forecast initiation automatic and responsive to the weather. The grid computing community was focused on web services as a scalable, interoperable architecture paradigm [12]. Research was occurring on one-pass data-mining algorithms for mesoscale phenomena [3]. The CASA Engineering Research Center

Funded by National Science Foundation under Cooperative Agreements: ATM0331594 (OU), ATM-0331591 (CO State), ATM-0331574 (Millersville), ATM0331480 (IU), ATM-0331579 (UAH), ATM03-31586 (Howard), ATM-0331587 (UCAR), and ATM-0331578 (UIUC).



[4] was building small, inexpensive, high-resolution Doppler radars. Finally, large-scale computational grids, such as TeraGrid, began to emerge as a community resource for large-scale distributed computations. In this paper we revisit the concept of dynamic adaptivity as presented in our DDDAS workshop paper of 2005 [1], a conceptualization that has grown and matured. We discuss the facets of the model as they exist today, and touch on lessons learned in working with a complex SOA in support of data driven science.

2 System Model

Creating a cyberinfrastructure that supports dynamic, adaptive responses to current weather conditions requires several facets of dynamism. The service framework must be able to respond to weather conditions by detecting the condition, then directing and allocating resources to collect more information about the weather and generate forecasts. Events also occur as execution or run-time phenomena: problems in ingesting data, in network failure, in resource availability, and in inadequate model progress, for instance. Adaptivity is driven by several key requirements:

User-Initiated Workflows. A typical mode of usage of the LEAD system is a user-initiated workflow through a portal (also known as a "science gateway") where a user composes a workflow or selects a pre-composed workflow and configures the computational components and data selection criteria. In this scenario, the system needs mechanisms to procure resources and enable workflow execution, provide recovery mechanisms from persistent and transient service failures, adapt to resource availability, and recover from resource failures during workflow execution.

Priorities of Workflows. The LEAD cyberinfrastructure simultaneously supports science research and educational use, so workflow prioritization must be supported. Consider the case of an educational LEAD workshop where resources have been reserved through out-of-band mechanisms for advanced reservation. Resource allocation needs to be based on the existing load on the machines, resource availability, the user priorities and the workflow load. The bounded set of resources available to the workshop might need to be proportionally shared among the workflow users. If a severe weather event were to occur during the workshop, resources might need to be reallocated and conflicting events might need some arbitration.

Dynamic Weather Events. We consider the case of dynamic weather event detection with data mining. Users have the freedom to specify dynamic mining criteria from the portal, and use the triggers from detected weather phenomena as the basis for automated forecast initiation. This freedom creates resource arbitration issues. Multiple weather events and their severity might factor into assigning priorities between users for appropriate allocation of limited available resources, for instance.


Advanced User Workflow Alternatives. An advanced user has a workflow and provides a set of constraints (e.g., a time deadline) and the work to be done. Tradeoffs may have to be made to arbitrate resources. For instance, the user might be willing to sacrifice forecast resolution to get early results which might then define the rest of the workflow.

[Fig. 1 contents - layer stack, top to bottom: Portals or science gateways (user-level interfaces to specify workflow DAGs and constraints); Workflow tools (multiple application coordination); Resource Management Services (job and file management); Resource Control Plane (resource configuration, application middleware management); Resources (clusters, sensors, radars, etc.). The application control plane and the resource adaptation plane interact with this stack through user input, resource needs and changes, workflow monitoring, execution control, resource status, and resource management.]

Fig. 1. The service and resource stack is controlled by the application control plane interacting at the workflow level; the resource adaptation plane effects changes to the underlying layers. Stream mining is a user-level abstraction, so it executes as a node in a workflow.

The conceptualization of the system as reported in the 2005 DDDAS workshop paper [1] casts an adaptive infrastructure as an adaptation system that mirrors the forecast control flow. While a workflow executes services in the generation of a forecast, the adaptive system is busy monitoring the behavior of the system, the application, and the external environment. Through pushing events to a single software bus, the adaptive system interacts with the workflow system to enact appropriate responses to events. The model eventually adopted is somewhat more sophisticated. As shown in Figure 1, the external-facing execution is one of users interacting through the portal to run workflows. The workflows consume resources, and access to the resources is mediated by a resource control plane [11]. The adaptive components are mostly hidden from the user. A critical component of the application control plane is monitoring workflow execution (Section 4). The resource adaptation plane manages changes in resource allocation, in consultation with the application control plane, and in response to a number of external stimuli (Section 5). At the core of the LEAD architecture is a pair of scalable publish-subscribe event notification systems. One is a high-bandwidth event streaming bus designed to handle large amounts of distributed data traffic from instruments and other remote sources [5]. The second bus handles communication between the service components of the system. While not as fast as a specialized bus, it does not need to be. Its role is to be the conduit for the notifications related to the response triggers and all events associated with the workflow enactment as well as the overall state


of the system [7]. This event bus is based on the WS-Eventing standard endorsed by Microsoft, IBM and others and is a very simple XML message channel.

3 Dynamic Data Mining

Dynamic weather event responsiveness is achieved by means of the Calder stream processing engine (SPE) developed at Indiana University [5], which provides on-the-wire continuous-query access to, filtering of, and transformation of data in data streams. Its functionality includes access to a large suite of clustering data mining algorithms for detecting mesoscale weather conditions, developed at the University of Alabama Huntsville [3]. The SPE models data streams as a single coherent data repository (a "stream store") of indefinite streams, and provides SQL-like continuous queries for accessing them. The layer that transports stream data is a binary publish-subscribe system. Sensors and instruments are currently added to the stream network by a manual process of installing a point-of-presence in front of the instrument that converts events from the native format to the system's XML format and pub-sub communication protocol. As an example of the SPE in LEAD, suppose a user wishes to keep an eye on a storm front moving into Chicago later that evening. He or she logs into the portal and configures an agent to observe NEXRAD Level II radar data streams for severe weather developing over the Chicago region. The request is in the form of a continuous query, which executes the mining algorithm repeatedly. Data mining will result in a response trigger, such as "concentration of high reflectivity found centered at lat=x, lon=y". The Calder service communicates with other LEAD components using the WS-Eventing notification system and uses an internal event channel for the transfer of data streams. The query execution engine subscribes to channels that stream observational data as events; these arrive as bzipped binary data chunks and are broken open to extract metadata that is then stored as an XML event.
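As a rough illustration, the sketch below shows how such a continuous mining query might be registered programmatically. The `CalderClient` interface, the query syntax, and the stream and broker names are hypothetical stand-ins; the actual Calder API is not documented here.

```python
# Illustrative sketch only: the client class, register_query call, and names
# below are hypothetical, not the real Calder interfaces.

QUERY = """
SELECT detect_mesocyclone(chunk) AS trigger
FROM   nexrad_level2_stream
WHERE  region = 'Chicago'
EVERY  5 MINUTES
"""

def on_trigger(event):
    # e.g. "concentration of high reflectivity found centered at lat=x, lon=y"
    print("Severe-weather trigger:", event)

# client = CalderClient(broker_url="...")            # hypothetical
# client.register_query(QUERY, callback=on_trigger)  # hypothetical
```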

4 System Monitoring

The dynamic nature of the workflows and data products in LEAD necessitates runtime monitoring to capture the workflow execution trace, including invocation of services, creation of data products, and use of computational, storage, and network resources. There are two key drivers to our monitoring: gathering a near real-time view of the system to detect and pinpoint problems, and building an archive of process and data provenance to assist in resource usage prediction and intelligent allocation for future workflow runs. Orthogonally, three types of monitoring are done: application resource usage monitoring, application fault monitoring, and quality of service monitoring. In this section we describe the monitoring requirements and detail the use of the Karma provenance system [6] in monitoring the LEAD system.


Monitoring Resource Usage. Resource usage monitoring provides information on resource behavior and predicted time for workflow completion, and helps guide the creation of new "soft" resources as a side-effect of workflow execution. Taking a top-down view, resource usage starts with workflows that are launched through the workflow engine. The workflows, which themselves behave as services, consume resources from the workflow engine, itself a service. The various services that form part of the workflow represent the next level of resources used. Services launch application instances that run on computational nodes, consuming compute resources. Data transfer between applications consumes network bandwidth, while staging files consumes storage resources. In addition to knowledge about the available resource set in the system, which might be obtained through a network monitoring tool and prior resource reservations, real-time resource usage information gives an estimate of the resources available for allocation, less those that might be prone to faults. In the case of data products, the creation of replicas as part of a workflow run makes a new resource (a data replica) available to the system. Similar "soft" resources are transient services that are created for one workflow but may be reused by others. Resource usage information also allows us to extrapolate future behavior, aiding resource allocation decisions.

Monitoring Application Faults. Dynamic systems have faults that take place in the application plane and need appropriate action, such as restarting the application from the last checkpoint, or rerunning or redeploying applications. Faults may take place at different levels, and it is possible that a service failure was related to a hardware resource failure; hence sufficient correlation has to be present to link resources used across levels. Faults may take place in service creation because of insufficient hardware resources or permissions, during staging because of missing external data files or network failure, or during application execution due to missing software libraries.

Monitoring Quality of Service. Monitoring also aids in ensuring a minimum quality-of-service guarantee for applications. Workflow execution progress is tracked and used to estimate completion time. If the job cannot finish within the stated window, it may be necessary to preempt a lower-priority workflow. The quality of data is determined through real-time monitoring and the maintenance of a quality model [8]. Data quality is a function of, among other attributes, the objective quality of service for accessing or transferring the data as well as the subjective quality of the data from a user's perspective, which may be configured for specific application needs.

LEAD uses the Karma provenance system to instrument workflows and services to generate real-time monitoring events and to store them for future mining. Karma defines an information model [6] built upon a process-oriented view of workflows and a data-oriented view of product generation and consumption. Activities describe the creation and termination of workflows and services, invocation of services and their responses (or faults), data transferred, consumed, and produced by applications, and computational resources used by applications. The activities help identify the level at which the activity took place (workflow,


service, application) and the time through causal ordering, along with attributes that describe specific activities. For example, the data produced activity generated by a service would describe the application that generated the data, the workflow it was part of and the stage in the workflow, the unique ID for the data along with the specific URL for that replica, and the timestamp of creation. The activities are published as notifications using the publish-subscribe system that implements the WS-Eventing specification [7]. The Karma provenance service subscribes to all workflow-related notifications and builds a global view of the workflow execution by stitching the activities together. One of the views exported by the provenance service through its querying API is the current state of a workflow in the form of an execution trace. This provides information about the various services and applications used by the workflow, the data and compute resources used, and the progress of the workflow execution. More fine-grained views can also be extracted that detail the trace of a single service or application invocation. The information collected can be mined to address the needs postulated earlier. In addition, the activity notifications can also be used directly to monitor the system in real-time.
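For illustration, a "data produced" activity of the kind just described might carry fields like the following. The dictionary keys and the publish helper are hypothetical stand-ins; Karma defines its own XML information model, which is not reproduced here.

```python
from datetime import datetime, timezone

# Hypothetical shape of a "data produced" provenance activity; field names are
# illustrative only, not the actual Karma schema.
activity = {
    "type": "dataProduced",
    "workflow_id": "lead.workflow.42",      # workflow the service was part of
    "workflow_stage": "wrf-forecast",       # stage in the workflow
    "service": "WRFForecastService",        # application that generated the data
    "data_id": "urn:lead:data:abc123",      # unique ID for the data product
    "replica_url": "gridftp://host/path",   # specific URL for this replica
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# publish(activity)   # hypothetical WS-Eventing publish call
```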

5 Adaptation for Performability

In today's grid and workflow systems, where the nature of the applications or workflows is known a priori, resource management, workflow planning, and adaptation techniques are based on the performance characteristics of the application [9]. The LEAD workflows, in addition to having dynamic characteristics, also operate under very tight constraints such as time deadlines. In these types of workflows, it is important to consider the reliability and timely availability of the underlying resources in conjunction with the performance of the workflow. Our goal is to adapt for performability, a term originally defined by J. Meyer [10]. Performability is a composite measure of performance and dependability, capturing the system's performance in the presence of failures and varying availability. The bottom-up performability evaluation of grid resources and the top-down user expectations, workflow constraints, and application needs together guide the adaptation in the application control plane and the resource adaptation plane. Adaptation might include procuring more resources than originally anticipated, changing resources and/or services for scheduled workflows, reacting to failures, applying fault-tolerance strategies, and so on. To meet the performability guarantees of the workflow, we propose two-way communication in our adaptation framework between the resource adaptation plane and the application control plane (see Figure 1). The application control plane interacts with the resource control plane to inquire about resource status and availability, select resources for workflow execution, and guide resource recruitment decisions. In turn, the application control plane needs information from the resource layer about resource status and availability and, during execution, about failures or changes in performance and reliability.


The adaptive system has workflow planner and controller components. The workflow planner applies user constraints and choices in conjunction with resource information to develop an "online" execution and adaptation plan. The workflow controller is the global monitoring agent that controls the run-time execution. The workflow planner produces an annotated version of the original user-specified DAG that is then used by the workflow controller to monitor and orchestrate the progress of the workflow. The LEAD workflows have a unique set of requirements that drive different kinds of interaction between the workflow planner and the resource control plane. As discussed in Section 2, the LEAD system is used for educational workshops. In this scenario, the workflow planner might need to distribute the bounded set of available resources among the workshop participants. It is possible that, during the course of execution, additional resources become available which could be used by the existing workflows. When notified of such availability, the workflow planning step can reconfigure the workflows to take advantage of the additional resources. In this scenario, the workflow adaptation is completely transparent to the end user. The goal of the workflow controller is to control the schedule and the adaptation tasks of the workflow engine. The workflow controller can potentially receive millions of adaptation events, sometimes requiring conflicting actions, and hence it uses an arbitration policy. For example, if a weather event occurs during an educational workshop, resources will need to be reallocated and the other workflows paused until the higher-priority workflows are serviced. The workflow controller uses pre-determined policies to determine the level of adaptation and dynamism it can respond to without additional intervention. Some adaptation decisions might require external human intervention, for example, if all TeraGrid machines go down at the same time. This multi-level adaptation framework enhances existing grid middleware, allowing resources and user workflows to interact to enable a flexible, adaptive, resilient environment that can respond to resource variability in conjunction with changing user requirements.
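A minimal sketch of a priority-based arbitration policy of the kind the workflow controller applies is shown below; the event classes, priority values, and data layout are illustrative assumptions rather than LEAD's actual policy.

```python
import heapq

# Assumed priority ordering: lower number = more urgent. Classes and values
# are illustrative, not LEAD's actual policy.
PRIORITY = {"severe_weather": 0, "research": 1, "workshop": 2}

class Arbiter:
    def __init__(self):
        self._queue = []          # min-heap of (priority, seq, event)
        self._seq = 0

    def submit(self, event):
        prio = PRIORITY.get(event["class"], 3)
        heapq.heappush(self._queue, (prio, self._seq, event))
        self._seq += 1

    def next_action(self):
        """Return the most urgent pending adaptation event, if any."""
        if self._queue:
            return heapq.heappop(self._queue)[2]
        return None

arbiter = Arbiter()
arbiter.submit({"class": "workshop", "action": "allocate_share"})
arbiter.submit({"class": "severe_weather", "action": "preempt_and_reallocate"})
assert arbiter.next_action()["class"] == "severe_weather"
```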

6 Conclusion

The LEAD adaptation framework provides a strong foundation for exploring the effect of complex next-generation workflow characteristics, such as hierarchical workflows and uncertainties in the execution path, on resource coordination. Four years into the LEAD project we are considerably closer to realizing the goal of a fully adaptive system. Architecting a system in a multidisciplinary, collaborative academic setting benefits from the modular nature of a service-oriented architecture: the loosely coupled solutions need only minimally interact. As the infrastructure begins to take on large numbers of external and educational users, most notably to serve the National Collegiate Forecasting contest, issues of reliability and long queue delays become the most immediate and pressing concerns, and they are concerns for which the adaptation framework described here is well suited.


Acknowledgements. The authors thank the remaining LEAD team, including PIs Kelvin Droegemeier (Oklahoma Univ.), Mohan Ramamurthy (Unidata), Dennis Gannon (Indiana Univ.), Sara Graves (Univ. of Alabama Huntsville), Daniel A. Reed (UNC Chapel Hill), and Bob Wilhelmson (NCSA). Author Lavanya Ramakrishnan conducted some of the work while a researcher at UNC Chapel Hill.

References
1. B. Plale, D. Gannon, D. Reed, S. Graves, K. Droegemeier, B. Wilhelmson, M. Ramamurthy. Towards Dynamically Adaptive Weather Analysis and Forecasting in LEAD. In ICCS Workshop on Dynamic Data Driven Applications, LNCS 3515, pp. 624-631, 2005.
2. K. K. Droegemeier, D. Gannon, D. Reed, B. Plale, J. Alameda, T. Baltzer, K. Brewster, R. Clark, B. Domenico, S. Graves, E. Joseph, D. Murray, R. Ramachandran, M. Ramamurthy, L. Ramakrishnan, J. A. Rushing, D. Weber, R. Wilhelmson, A. Wilson, M. Xue and S. Yalda. Service-Oriented Environments for Dynamically Interacting with Mesoscale Weather. Computing in Science and Engineering, 7(6), pp. 12-29, 2005.
3. X. Li, R. Ramachandran, J. Rushing, S. Graves, K. Kelleher, S. Lakshmivarahan, and J. Levit. Mining NEXRAD Radar Data: An Investigative Study. In American Meteorological Society Annual Meeting, 2004.
4. K. K. Droegemeier, J. Kurose, D. McLaughlin, B. Philips, M. Preston, S. Sekelsky, J. Brotzge, V. Chandresakar. Distributed Collaborative Adaptive Sensing for Hazardous Weather Detection, Tracking, and Predicting. In International Conference on Computational Science (ICCS), 2004.
5. Y. Liu, N. Vijayakumar, B. Plale. Stream Processing in Data-driven Computational Science. In 7th IEEE/ACM International Conference on Grid Computing (Grid'06), 2006.
6. Y. L. Simmhan, B. Plale, and D. Gannon. A Framework for Collecting Provenance in Data-Centric Scientific Workflows. In International Conference on Web Services (ICWS), 2006.
7. Y. Huang, A. Slominski, C. Herath and D. Gannon. WS-Messenger: A Web Services-based Messaging System for Service-Oriented Grid Computing. In Cluster Computing and the Grid (CCGrid), 2006.
8. Y. L. Simmhan, B. Plale, and D. Gannon. Towards a Quality Model for Effective Data Selection in Collaboratories. In IEEE Workshop on Workflow and Data Flow for Scientific Applications (SciFlow), 2006.
9. A. Mandal, K. Kennedy, C. Koelbel, G. Marin, J. Mellor-Crummey, B. Liu and L. Johnsson. Scheduling Strategies for Mapping Application Workflows onto the Grid. In IEEE International Symposium on High Performance Distributed Computing, 2005.
10. J. Meyer. On Evaluating the Performability of Degradable Computing Systems. IEEE Transactions on Computers, 1980.
11. L. Ramakrishnan, L. Grit, A. Iamnitchi, D. Irwin, A. Yumerefendi and J. Chase. Toward a Doctrine of Containment: Grid Hosting with Adaptive Resource Control. In ACM/IEEE SC 2006 Conference (SC'06), 2006.
12. I. Foster, C. Kesselman (eds). The Grid 2: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers Inc, 2003.

Active Learning with Support Vector Machines for Tornado Prediction

Theodore B. Trafalis1, Indra Adrianto1, and Michael B. Richman2

1 School of Industrial Engineering, University of Oklahoma, 202 West Boyd St, Room 124, Norman, OK 73019, USA
[email protected], [email protected]
2 School of Meteorology, University of Oklahoma, 120 David L. Boren Blvd, Suite 5900, Norman, OK 73072, USA
[email protected]

Abstract. In this paper, active learning with support vector machines (SVMs) is applied to the problem of tornado prediction. This method is used to predict which storm-scale circulations yield tornadoes based on the radar-derived Mesocyclone Detection Algorithm (MDA) and near-storm environment (NSE) attributes. The main goal of active learning is to choose the instances or data points that are important or have influence on our model to be labeled and included in the training set. We compare this method to passive learning with SVMs, where the next instances to be included in the training set are randomly selected. The preliminary results show that active learning can achieve high performance and significantly reduce the size of the training set.

Keywords: Active learning, support vector machines, tornado prediction, machine learning, weather forecasting.

1 Introduction

Most conventional learning methods use static data in the training set to construct a model or classifier. The ability of learning methods to update the model dynamically, using new incoming data, is important. One method that has this ability is active learning. The objective of active learning for classification is to choose the instances or data points to be labeled and included in the training set. In many machine learning tasks, collecting data and/or labeling data to create a training set is costly and time-consuming. Rather than selecting and labeling data randomly, it is better if we can label the data that are important or have influence on our model or classifier. In tornado prediction, labeling data is considered costly and time-consuming since we need to verify which storm-scale circulations produce tornadoes on the ground. The tornado events can be verified from ground truth, including photographs, videos, damage surveys, and eyewitness reports. Based on tornado verification, we then label which circulations produce tornadoes and which do not. Therefore, applying active learning for tornado prediction, to minimize the number of labeled instances needed and to use the most informative instances in the training set to update the classifier, would be beneficial.


In the literature, the Mesocyclone Detection Algorithm (MDA) attributes [1] derived from Doppler radar velocity data have been used to detect tornado circulations. Marzban and Stumpf [1] applied artificial neural networks (ANNs) to classify MDA detections as tornadic or non-tornadic circulations. Additionally, Lakshmanan et al. [2] used ANNs and added the near-storm environment (NSE) data into the original MDA data set and determined that the skill improved marginally. Application of support vector machines (SVMs) using the same data set used by Marzban and Stumpf [1] has been investigated by Trafalis et al. [3]. Trafalis et al. [3] compared SVMs with other classification methods, such as ANNs and radial basis function networks, concluding that SVMs provided better performance in tornado detection. Moreover, a study by Adrianto et al. [4] revealed that the addition of NSE data into the MDA data can improve performance of the classifiers significantly. However, those experiments in the literature were conducted using static data. In this paper, we investigated the application of active learning with SVMs for tornado prediction using the MDA and NSE data. We also compared this method to passive learning with SVMs using these data where the next instances to be added to the training set are randomly selected.

2 Data and Analysis

The original data set comprised 23 attributes taken from the MDA algorithm [1]. These attributes measure radar-derived velocity parameters that describe various aspects of the mesocyclone. Subsequently, 59 attributes from the NSE data [2] were incorporated into this data set. The NSE data described the pre-storm environment of the atmosphere on a broader scale than the MDA data, as the MDA attributes are radar-based. The NSE data measured wind speed, direction, wind shear, humidity lapse rate, and the predisposition of the atmosphere to accelerate air rapidly upward over specific heights. Therefore, the MDA+NSE data consist of 82 attributes.

3 Methodology

3.1 Support Vector Machines

The SVM algorithm was developed by Vapnik and has become a powerful method in machine learning [5-7]. This algorithm has been used in real-world applications and is well known for its superior practical results. In binary classification problems, the SVM algorithm constructs a hyperplane that separates a set of training vectors into two classes (Fig. 1). The objective of SVMs (the primal problem) is to maximize the margin of separation and to minimize the misclassification error. The SVM formulation can be written as follows [8]:

\[
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}}\; \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{i=1}^{l}\xi_i
\quad\text{subject to}\quad
y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0,\;\; i = 1,\dots,l
\tag{1}
\]


where w is the weight vector perpendicular to the separating hyperplane, b is the bias of the separating hyperplane, ξ_i is a slack variable, and C is a user-specified parameter that represents a trade-off between generalization and misclassification. Using Lagrange multipliers α, the SVM dual formulation becomes [8]:

\[
\max_{\boldsymbol{\alpha}}\; Q(\boldsymbol{\alpha}) = \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_i\alpha_j\, y_i y_j\, \mathbf{x}_i\cdot\mathbf{x}_j
\quad\text{subject to}\quad
\sum_{i=1}^{l}\alpha_i y_i = 0,\;\; 0 \le \alpha_i \le C,\;\; i = 1,\dots,l
\tag{2}
\]

The optimal solution of Eq. (1) is given by \(\mathbf{w} = \sum_{i=1}^{l}\alpha_i y_i \mathbf{x}_i\), where \(\boldsymbol{\alpha} = (\alpha_1,\dots,\alpha_l)\) is the optimal solution of the optimization problem in Eq. (2). The decision function is defined as \(g(\mathbf{x}) = \operatorname{sign}(f(\mathbf{x}))\), where

\[
f(\mathbf{x}) = \mathbf{w}\cdot\mathbf{x} + b
\tag{3}
\]

[Figure 1 illustrates the separating hyperplane w·x + b = 0 with the margin boundaries w·x + b = ±1, support vectors lying on the margin boundaries, a slack variable ξ for a misclassified point, and the regions inside and outside the margin of separation for class 1 and class -1.]

Fig. 1. Illustration of support vector machines

For solving nonlinear problems, the SVM algorithm maps the input vector x into a higher-dimensional feature space through some nonlinear mapping Φ and constructs an optimal separating hyperplane [7]. Suppose we map the vector x into a vector in the feature space (Φ1(x), ..., Φn(x), ...); then an inner product in feature space has an equivalent representation defined through a kernel function K as K(x1, x2) = ⟨Φ(x1), Φ(x2)⟩ [8]. Therefore, we can introduce the inner-product kernel as K(xi, xj) = ⟨Φ(xi), Φ(xj)⟩ and substitute the dot-product in the dual problem in Eq. (2) with this kernel function. The kernel function used in this study is the radial basis function (RBF) with K(xi, xj) = exp(−γ‖xi − xj‖²), where γ is the parameter that controls the width of the RBF.
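As a concrete illustration, the NumPy sketch below evaluates the RBF kernel and the kernelized form of the decision function in Eq. (3). It is an illustrative sketch rather than the authors' Matlab/LIBSVM implementation; the support vectors, multipliers, and bias are assumed to come from a trained model.

```python
import numpy as np

def rbf_kernel(x1, x2, gamma):
    """K(x1, x2) = exp(-gamma * ||x1 - x2||^2), the kernel used in this study."""
    return np.exp(-gamma * np.sum((np.asarray(x1) - np.asarray(x2)) ** 2))

def decision_function(x, support_vectors, alphas, labels, b, gamma):
    """Kernelized form of Eq. (3): f(x) = sum_i alpha_i y_i K(x_i, x) + b,
    with g(x) = sign(f(x))."""
    f = sum(a * y * rbf_kernel(sv, x, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors)) + b
    return np.sign(f), f
```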


[Figure 2 is a flow diagram of the active learning scheme: a batch of new unlabeled data U is tested against the current classifier; instances inside the margin of separation have their correct labels requested and are added to the labeled data L, which updates the query function f(L) and the classifier, while instances outside the margin are removed.]

Fig. 2. Active learning with SVMs scheme

3.2 Active Learning with SVMs

Several active learning algorithms with SVMs have been proposed by Campbell et al. [9], Schohn and Cohn [10], and Tong and Koller [11]. Campbell et al. [9] suggested that the generalization performance of a learning machine can be improved significantly with active learning. Using SVMs, the basic idea of these active learning algorithms is to choose as the next query the unlabeled instance closest to the separating hyperplane in the feature space, which is the instance with the smallest margin [9-11]. In this paper, we choose the instances that are inside the margin of separation to be labeled and included in the training set. Since the separating hyperplane lies in the middle of the margin of separation, these instances will have an effect on the solution; the instances outside the margin of separation are therefore removed. Suppose we are given an unlabeled pool U and a set of labeled data L. The first step is to find a query function f(L) that, given a set of labeled data L, determines which instances in U to query next. This idea is called pool-based active learning. The active learning scheme is shown in Fig. 2.
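A minimal sketch of one batch-selection round of this scheme is given below, using scikit-learn's SVC (which wraps LIBSVM) in place of the authors' Matlab setup; the `oracle` callable that supplies the requested labels is a hypothetical stand-in for the labeling step.

```python
import numpy as np
from sklearn.svm import SVC

def active_learning_round(clf, X_labeled, y_labeled, X_batch, oracle):
    """Keep only batch instances inside the margin (|f(x)| < 1), query their
    labels, retrain, and discard the rest, following the scheme of Fig. 2.
    `oracle` stands in for requesting the correct labels (an assumption)."""
    margin = np.abs(clf.decision_function(X_batch))
    inside = margin < 1.0
    if inside.any():
        X_labeled = np.vstack([X_labeled, X_batch[inside]])
        y_labeled = np.concatenate([y_labeled, oracle(X_batch[inside])])
        clf.fit(X_labeled, y_labeled)
    return clf, X_labeled, y_labeled

# clf = SVC(kernel="rbf", gamma=0.01, C=10).fit(X0, y0)  # initial 10 instances
```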

3.3 Measuring the Quality of the Forecasts for Tornado Prediction

In order to measure the performance of a tornado prediction classifier, it is important to compute scalar forecast evaluation scores such as the Critical Success Index (CSI), Probability of Detection (POD), False Alarm Ratio (FAR), Bias, and Heidke Skill Score (HSS), based on a "confusion" matrix or contingency table (Table 1). These skill scores are defined as: CSI = a/(a+b+c), POD = a/(a+c), FAR = b/(a+b), Bias = (a+b)/(a+c), and HSS = 2(ad-bc)/[(a+c)(c+d)+(a+b)(b+d)]. It is important not to rely solely on a forecast evaluation statistic incorporating cell d from the confusion matrix, as tornadoes are rare events with many correct nulls. This is important as there is little usefulness in forecasting "no" tornadoes every day. Indeed, the claim of skill associated with such forecasts including correct nulls for rare events has a notorious history in meteorology [13]. The CSI measures the


accuracy of a solution, equal to the total number of correct event forecasts (hits) divided by the total number of tornado forecasts plus the number of misses (hits + false alarms + misses) [12]. It has a range of 0 to 1, where 1 is a perfect value. The POD calculates the fraction of observed events that are correctly forecast. It has a perfect score of 1 and a range of 0 to 1 [14]. The FAR measures the ratio of false alarms to the number of "yes" forecasts. It has a perfect score of 0 and a range of 0 to 1 [14]. The Bias computes the total number of event forecasts (hits + false alarms) divided by the total number of observed events. It shows whether the forecast system is underforecasting (Bias < 1) or overforecasting (Bias > 1) events, with a range of 0 to +∞ and a perfect score of 1 [14]. The HSS [15] is commonly used in forecasting since it considers all elements in the confusion matrix. It measures the relative increase in forecast accuracy over some reference forecast; in the present formulation, the reference forecast is a random guess. A skill value > 0 is more accurate than the reference. It has a perfect score of 1 and a range of -1 to 1.

Table 1. Confusion matrix

                          Observation
                          Yes                 No
Forecast    Yes           hit (a)             false alarm (b)
            No            miss (c)            correct null (d)
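The score definitions above translate directly into code; a minimal helper is sketched below (guards against division by zero are omitted for brevity).

```python
def skill_scores(a, b, c, d):
    """Forecast skill scores from the confusion matrix of Table 1:
    a = hits, b = false alarms, c = misses, d = correct nulls."""
    csi = a / (a + b + c)
    pod = a / (a + c)
    far = b / (a + b)
    bias = (a + b) / (a + c)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return {"CSI": csi, "POD": pod, "FAR": far, "Bias": bias, "HSS": hss}
```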

4 Experiments

The data were divided into two sets: training and testing. The training set contained 382 tornadic instances and 1128 non-tornadic instances. In order to perform online-setting experiments, the training instances were arranged in time order. The testing set consisted of 387 tornadic instances and 11872 non-tornadic instances. For both active and passive learning experiments, the initial training set was the first 10 instances, consisting of 5 tornadic instances and 5 non-tornadic instances. At each iteration, new data were injected in a batch of several instances. Two different batch sizes, 75 and 150 instances, were used for comparison. In passive learning with SVMs, all incoming data were labeled and included in the training set. Conversely, active learning with SVMs only chooses the instances from each batch that are most informative for the classifier. Therefore, the classifier was updated dynamically at each iteration. The performance of the classifier can be measured by computing the scalar skill scores (Section 3.3) on the testing set. The radial basis function kernel with γ = 0.01 and C = 10 was used in these experiments. The experiments were performed in the Matlab environment using the LIBSVM toolbox [16]. Before training a classifier, the data set needs to be normalized. We normalized the training set so that each attribute has a mean of 0 and a standard deviation of 1. We then used the mean and standard deviation of each attribute in the training set to normalize the corresponding attribute in the testing set.
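The normalization step just described can be sketched as follows, assuming `X_train` and `X_test` are NumPy arrays holding the 82 MDA+NSE attributes; this is an illustrative sketch, not the authors' Matlab code.

```python
import numpy as np

def normalize(X_train, X_test):
    """Z-score the attributes using training-set statistics only, as described
    above; the same mean and standard deviation are reused for the test set."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    return (X_train - mu) / sigma, (X_test - mu) / sigma
```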


Fig. 3. (a) The results of CSI, POD, FAR, Bias, and HSS on the testing set using active and passive learning at all iterations. (b) The last iteration results with 95% confidence intervals on the testing set.


5 Results

It can be seen from Fig. 3a that for all skill scores (CSI, POD, FAR, Bias, and HSS), active learning achieved relatively the same scores as passive learning using fewer training instances. From the FAR diagram (Fig. 3a), we noticed that at an early iteration the active and passive learning FAR with the batch size of 75 dropped suddenly. This happened because the forecast system was underforecasting (Bias < 1) at that stage. Ultimately, every method produced overforecasting. Furthermore, Fig. 3b shows the last-iteration results with 95% confidence intervals after conducting bootstrap resampling with 1000 replications [17]. The 95% confidence intervals between the active and passive learning results with batch sizes of 75 and 150 overlapped each other for each skill score, so the differences were not statistically significant. These results indicate that active learning possessed similar performance to passive learning using the MDA and NSE data set. The results in Fig. 4 show that active learning significantly reduced the training set size needed to attain relatively the same skill scores as passive learning. Using the batch size of 75 instances, only 571 labeled instances were required in active learning whereas in passive learning 1510 labeled instances were needed (Fig. 4a). This experiment reveals that about a 62.6% reduction was realized by active learning. Using the batch size of 150 instances, active learning reduced the training set size by 60.5%, since it needed only 596 labeled instances whereas passive learning required 1510 labeled instances (Fig. 4b).

Fig. 4. Diagrams of training set size vs. iteration for the batch sizes of (a) 75 and (b) 150 instances

6 Conclusions

In this paper, active learning with SVMs was used to discriminate between mesocyclones that do not become tornadic and those that do form tornadoes. The preliminary results showed that active learning can significantly reduce the size of the training set and achieve relatively similar skill scores compared to passive learning. Since labeling new data is considered costly and time-consuming in tornado prediction, active learning would be beneficial for updating the classifier dynamically.


Acknowledgments. Funding for this research was provided under the National Science Foundation Grant EIA-0205628 and NOAA Grant NA17RJ1227.

References
1. Marzban, C., Stumpf, G.: A neural network for tornado prediction based on Doppler radar-derived attributes. J. Appl. Meteorol. 35 (1996) 617-626
2. Lakshmanan, V., Stumpf, G., Witt, A.: A neural network for detecting and diagnosing tornadic circulations using the mesocyclone detection and near storm environment algorithms. In: 21st International Conference on Information Processing Systems, San Diego, CA, Amer. Meteor. Soc. (2005) CD-ROM J5.2
3. Trafalis, T.B., Ince, H., Richman, M.B.: Tornado detection with support vector machines. In: Sloot, P.M. et al. (eds.): Computational Science - ICCS (2003) 202-211
4. Adrianto, I., Trafalis, T.B., Richman, M.B., Lakshmivarahan, S., Park, J.: Machine learning classifiers for tornado detection: sensitivity analysis on tornado data sets. In: Dagli, C., Buczak, A., Enke, D., Embrechts, M., Ersoy, O. (eds.): Intelligent Engineering Systems Through Artificial Neural Networks, Vol. 16. ASME Press (2006) 679-684
5. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Haussler, D. (ed.): 5th Annual ACM Workshop on COLT. ACM Press, Pittsburgh, PA (1992) 144-152
6. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer Verlag, New York (1995)
7. Vapnik, V.N.: Statistical Learning Theory. Springer Verlag, New York (1998)
8. Haykin, S.: Neural Networks: A Comprehensive Foundation. 2nd edn. Prentice Hall, New Jersey (1999)
9. Campbell, C., Cristianini, N., Smola, A.: Query learning with large margin classifiers. In: Proceedings of ICML-2000, 17th International Conference on Machine Learning (2000) 111-118
10. Schohn, G., Cohn, D.: Less is more: Active learning with support vector machines. In: Proceedings of ICML-2000, 17th International Conference on Machine Learning (2000) 839-846
11. Tong, S., Koller, D.: Support vector machine active learning with applications to text classification. J. Mach. Learn. Res. 2 (2001) 45-66
12. Donaldson, R., Dyer, R., Krauss, M.: An objective evaluator of techniques for predicting severe weather events. In: 9th Conference on Severe Local Storms, Norman, OK, Amer. Meteor. Soc. (1975) 321-326
13. Murphy, A.H.: The Finley affair: a signal event in the history of forecast verifications. Weather Forecast. 11 (1996) 3-20
14. Wilks, D.: Statistical Methods in Atmospheric Sciences. Academic Press, San Diego, CA (1995)
15. Heidke, P.: Berechnung des Erfolges und der Güte der Windstärkevorhersagen im Sturmwarnungsdienst. Geogr. Ann. 8 (1926) 301-349
16. Chang, C., Lin, C.: LIBSVM: a library for support vector machines (2001). Software available online.
17. Efron, B., Tibshirani, R.J.: An Introduction to the Bootstrap. Chapman & Hall, New York (1993)

Adaptive Observation Strategies for Forecast Error Minimization

Nicholas Roy1, Han-Lim Choi2, Daniel Gombos3, James Hansen4, Jonathan How2, and Sooho Park1

1 Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA 02139
2 Aerospace Controls Lab, Massachusetts Institute of Technology, Cambridge, MA 02139
3 Department of Earth and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
4 Marine Meteorology Division, Naval Research Laboratory, Monterey, CA 93943

Abstract. Using a scenario of multiple mobile observing platforms (UAVs) measuring weather variables in distributed regions of the Pacific, we are developing algorithms that will lead to improved forecasting of high-impact weather events. We combine technologies from the nonlinear weather prediction and planning/control communities to create a close link between model predictions and observed measurements, choosing future measurements that minimize the expected forecast error under time-varying conditions. We have approached the problem on three fronts. We have developed an information-theoretic algorithm for selecting environment measurements in a computationally effective way. This algorithm determines the best discrete locations and times to take additional measurement for reducing the forecast uncertainty in the region of interest while considering the mobility of the sensor platforms. Our second algorithm learns to use past experience in predicting good routes to travel between measurements. Experiments show that these approaches work well on idealized models of weather patterns.

1 Introduction

Recent advances in numerical weather prediction (NWP) models have greatly improved the computational tractability of long-range prediction accuracy. However, the inherent sensitivity of these models to their initial conditions has further increased the need for accurate and precise measurements of the environmental conditions. Deploying an extensive mobile observation network is likely to be costly, and measurements of the current conditions may produce different results in terms of improving forecast performance [1,2]. These facts have led to the development of observation strategies where additional sensors are deployed to achieve the best performance according to some


measures such as expected forecast error reduction and uncertainty reduction [3]. One method for augmenting a fixed sensor network is through the use of “adaptive” or “targeted” observations where mobile observing platforms are directed to areas where observations are expected to maximally reduce forecast error under some norm (see, for example, NOAA’s Winter Storm Reconnaissance Program [4]). The hypothesis is that these directed measurements provide better inputs to the weather forecasting system than random or gridded use of the observing assets. This paper describes an adaptive observation strategy that integrates nonlinear weather prediction, planning and control to create a close link between model predictions and observed measurements, choosing future measurements that minimize the expected forecast error under time-varying conditions. The main result will be a new framework for coordinating a team of mobile observing assets that provides more efficient measurement strategies and a more accurate means of capturing spatial correlations in the system dynamics, which will have broad applicability to measurement and prediction in other domains. We first describe the specific non-linear weather prediction model used to develop our adaptive observation strategy, and then describe a global targeting algorithm and a local path planner that together choose measurements to minimize the expected forecast error.

2 Models of Non-linear Weather Prediction

While there exist large-scale realistic models of weather prediction such as the Navy's Coupled Ocean Atmosphere Prediction System (COAMPS), our attention will be restricted to reduced models in order to allow computationally tractable experiments with different adaptive measurement strategies. The Lorenz-2003 model is an extended model of the Lorenz-95 model [1] that addresses multi-scale features of the weather dynamics in addition to basic aspects of the weather motion such as energy dissipation, advection, and external forcing. In this paper, the original one-dimensional model is extended to two dimensions, representing the mid-latitude region (20-70 deg) of the northern hemisphere. The system equations are

\[
\dot{y}_{ij} = -\,\xi_{i-2\alpha,j}\,\xi_{i-\alpha,j}
+ \frac{1}{2(\alpha/2)+1}\sum_{k=-\alpha/2}^{+\alpha/2}\xi_{i-\alpha+k,j}\,y_{i+k,j}
- \mu\,\eta_{i,j-2\beta}\,\eta_{i,j-\beta}
+ \frac{\mu}{2(\beta/2)+1}\sum_{k=-\beta/2}^{+\beta/2}\eta_{i,j-\beta+k}\,y_{i,j+k}
- y_{ij} + F
\tag{1}
\]

where

\[
\xi_{ij} = \frac{1}{2(\alpha/2)+1}\sum_{k=-\alpha/2}^{+\alpha/2} y_{i+k,j},
\qquad
\eta_{ij} = \frac{1}{2(\beta/2)+1}\sum_{k=-\beta/2}^{+\beta/2} y_{i,j+k},
\tag{2}
\]

and \(i = 1,\dots,L_{on}\), \(j = 1,\dots,L_{at}\). The subscript i denotes the west-to-east grid index, while j denotes the south-to-north grid index. The dynamics of the (i, j)-th grid


point depends on its longitudinal 2α-interval neighbors (and latitudinal 2β) through the advection terms, on itself by the dissipation term, and on the external forcing (F = 8 in this work). When α = β = 1, this model reduces to the two-dimensional Lorenz-95 model [3]. The length scale of this model is proportional to the inverse of α and β in each direction: for instance, the grid size for α = β = 2 amounts to 347 km × 347 km. The time scale is such that 0.05 time units are equivalent to 6 hours in real time.

2.1 State Estimation

A standard approach to state estimation and prediction is to use a Monte Carlo (ensemble) approximation to the extended Kalman filter, in which each ensemble member presents an initial state estimate of the weather system. These ensembles are propagated (for a set forecast time) through the underlying weather dynamics and the estimate (i.e., the mean value of these ensembles) is refined by measurements (i.e., updates) that are available through the sensor network. The particular approximation used in this work is the sequential ensemble square root filter [5] (EnSRF). In the EnSRF, the propagation of the mean state estimate and covariance matrix amounts to a nonlinear integration of ensemble members, improving the filtering of non-linearities compared to standard EKF techniques and mitigating the computational burden of maintaining a large covariance matrix [6,5]. The ensemble mean corresponds to the state estimate, and the covariance information can be obtained from the perturbation ensemble,

\[
\mathbf{P} = \tilde{\mathbf{X}}\tilde{\mathbf{X}}^T/(L_E - 1), \qquad \tilde{\mathbf{X}} \in \mathbb{R}^{L_S \times L_E}
\tag{3}
\]

where \(L_S\) is the number of state variables and \(L_E\) is the ensemble size. \(\tilde{\mathbf{X}}\) is the perturbation ensemble defined as

\[
\tilde{\mathbf{X}} = \eta\left(\mathbf{X} - \bar{\mathbf{x}} \times \mathbf{1}^T\right)
\tag{4}
\]

where X is the ensemble matrix, a row concatenation of each ensemble member, and x̄ is the ensemble mean, the row average of the ensemble matrix. η (≥ 1) is the covariance inflation factor introduced to avoid underestimation of the covariance by the finite ensemble size. The propagation step for the EnSRF is the integration

\[
\mathbf{X}^f(t+\Delta t) = \int_{t}^{t+\Delta t} \dot{\mathbf{X}}\, dt, \qquad \mathbf{X}(t) = \mathbf{X}^a(t),
\tag{5}
\]

with \(\mathbf{X}^f\) and \(\mathbf{X}^a\) denoting the forecast and analysis ensemble, respectively. The measurement update step for the EnSRF is

\[
\bar{\mathbf{x}}^a = \bar{\mathbf{x}}^f + \mathbf{K}(\mathbf{y} - \mathbf{H}\bar{\mathbf{x}}^f)
\tag{6}
\]
\[
\tilde{\mathbf{X}}^a = (\mathbf{I} - \mathbf{K}\mathbf{H})\tilde{\mathbf{X}}^f
\tag{7}
\]

where y denotes the observation vector and H is the linearized observation matrix. K denotes the appropriate Kalman gain, which can be obtained by solving a nonlinear matrix equation stated in terms of X [5]. The sequential update process avoids solving a nonlinear matrix equation and provides a faster method for determining K. The ensemble update by the m-th observation is

\[
\tilde{\mathbf{X}}^{m+1} = \tilde{\mathbf{X}}^m - \alpha_m \beta_m\, \mathbf{p}_i^m \boldsymbol{\xi}_i^m,
\tag{8}
\]
\[
\alpha_m = 1/\left(1 + \sqrt{\beta_m R_i}\right), \qquad \beta_m = 1/\left(P_{ii}^m + R_i\right)
\tag{9}
\]

where the measurement is taken of the i-th state variable; \(\mathbf{p}_i^m\), \(\boldsymbol{\xi}_i^m\), and \(P_{ii}^m\) are the i-th column, the i-th row, and the (i, i) element of the prior perturbation matrix \(\mathbf{P}^m\), respectively. \(\alpha_m\) is the factor that compensates for the mismatch between the serial update and the batch update, while \(\beta_m \mathbf{p}_i^m\) amounts to the Kalman gain. Figure 1(a) shows an example true state of a Lorenz model (top) over the 36 × 9 state variables. The bottom frame shows the estimated state at the same time. This estimate is computed from an EnSRF using 200 ensemble members. Observations are taken at 66 fixed (routine) locations represented by blue circles; note that there are regions where routine observations are sparse, representing areas such as open ocean where regular measurements are hard to acquire. Figure 1(b) (top) shows the squared analysis error between the true state and the ensemble estimate from the upper figure, that is, the actual forecast error. The lower panel shows the ensemble variance, that is, the expected squared forecast error. Note that the expected and true error are largely correlated; using 200 ensemble members was enough to estimate the true model with reasonable error, as shown in the figures.

Fig. 1. (a) Top panel: the true state of the Lorenz system, where the intensity correlates with the state value. Lower panel: The estimated state of the system, using 200 ensemble members. (b) Top panel: the actual forecast error. Lower panel: the ensemble variance.
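The serial measurement update of Eqs. (6)-(9) can be sketched in a few lines of NumPy. The sketch below assumes each observation measures a single state variable directly with noise variance R; it is a simplified illustrative reimplementation, not the authors' code.

```python
import numpy as np

def ensrf_serial_update(X, obs_idx, y_obs, R, inflation=1.0):
    """Serial EnSRF update for direct observations of state variables.
    X is the L_S x L_E ensemble matrix, obs_idx lists the observed state
    indices, y_obs holds the observed values, and R is the observation
    variance. Simplified sketch of Eqs. (6)-(9)."""
    x_bar = X.mean(axis=1, keepdims=True)
    Xp = inflation * (X - x_bar)                 # perturbation ensemble, Eq. (4)
    L_E = X.shape[1]
    for i, y in zip(obs_idx, y_obs):
        P = Xp @ Xp.T / (L_E - 1)                # prior covariance, Eq. (3)
        beta = 1.0 / (P[i, i] + R)
        alpha = 1.0 / (1.0 + np.sqrt(beta * R))
        K = beta * P[:, i:i + 1]                 # Kalman gain column (L_S x 1)
        x_bar = x_bar + K * (y - x_bar[i, 0])    # mean update, Eq. (6)
        Xp = Xp - alpha * K @ Xp[i:i + 1, :]     # perturbation update, Eq. (8)
    return x_bar + Xp                            # analysis ensemble
```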

For the purposes of forecast error, we are typically interested in improving the forecast accuracy for some small region, such as the coast of California, rather than the entire Pacific. A verification region X[v] and a verification time t_v are specified in our experiments, as shown by the red squares in Figure 1. Our goal is therefore to choose measurements of X at time t to minimize the forecast error at X[v] at time t_v.

[Figure 2: panel (a) sketches multi-UAV targeting in the grid space-time, with paths 1 through n selected over the targeting window t_0, t_1, ..., t_K leading to the verification time t_V in the space containing X[v]; panel (b) shows four example targeting plans over a map of the northeast Pacific (roughly 20-70 N, 105-165 W).]

Fig. 2. (a) Multi-UAV targeting in the grid space-time. (b) Targeting of four sensor platforms for the purpose of reducing the uncertainty of 3-day forecast over the west coast of North America.

3 A Targeting Algorithm for Multiple Mobile Sensor Platforms

The targeting problem is how to assign multiple sensor platforms (e.g. UAVs) to positions in the finite grid space-time (Figure 2a) in order to reduce the expected forecast uncertainty in the region of interest X[v]. We define the targeting problem as selecting n paths consisting of K (the size of the targeting time window) points that maximize the information gain at X[v] of the measurements taken along the selected paths. A new, computationally efficient backward selection algorithm forms the backbone of the targeting approach. To address the computational expense of determining the impact of each measurement choice on the uncertainty reduction in the verification site, the backward selection algorithm exploits the commutativity of mutual information. This enables the contribution of each measurement choice to be computed by propagating information backwards from the verification space/time to the search space/time, which significantly reduces the number of times that computationally expensive covariance updates must be performed. In addition, the proposed targeting algorithm employs a branch-and-bound search technique to reduce the computation required to calculate payoffs for suboptimal candidates, utilizing a simple cost-to-go heuristic based on a diagonal assumption on the covariance matrix that provides an approximate upper bound on the actual information gain. The suggested heuristic does not guarantee an optimal solution; nevertheless, in practice it results in a substantial reduction in computation time while incurring minimal loss of optimality, which can be improved by relaxing a bounding constraint. Figure 2(b) depicts an illustrative solution of the four-agent targeting problem (black markers) for enhancing the 3-day forecast of the west coast of North America (marked in red); the time interval between the marks is three hours. The computation time of the targeting algorithm grows exponentially as the number of sensor platforms and the size of the targeting window increase, in spite of the reduction in computational cost. Thus, further approximations that decompose the computation and decision making into different topologies and choices on the planning horizon will be explored. These have been shown to avoid the combinatorial explosion of the computation time, and the performance of the approximation scheme turns


out to depend highly on the communication topology between agents. The existence of inter-agent sharing of the up-to-date covariance information has also been shown to be essential to achieve performance.
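To make the information-theoretic selection concrete, the sketch below greedily ranks candidate measurement sites by the mutual information between a candidate observation and the verification variables, computed "backwards" from the candidate's variance conditioned on the verification region. It is a simplified greedy stand-in for the backward selection and branch-and-bound algorithm described above; the covariance `P`, noise variance `R`, and index sets are assumed inputs.

```python
import numpy as np

def greedy_backward_targeting(P, verif_idx, cand_idx, R, n_select):
    """Greedy sketch of information-driven targeting. Each round ranks the
    remaining candidate sites by I(y_s ; X[v]), computed from the variance of
    candidate s conditioned on the verification variables, then conditions the
    covariance on the chosen observation. Simplified; not the paper's
    branch-and-bound backward selection."""
    P = P.copy()
    V = np.asarray(verif_idx)
    remaining = list(cand_idx)
    selected = []
    for _ in range(n_select):
        P_VV_inv = np.linalg.inv(P[np.ix_(V, V)])
        gains = []
        for s in remaining:
            p_sV = P[s, V]
            cond_var = max(P[s, s] - p_sV @ P_VV_inv @ p_sV, 0.0)
            gains.append(0.5 * np.log((P[s, s] + R) / (cond_var + R)))
        best = remaining.pop(int(np.argmax(gains)))
        selected.append(best)
        k = P[:, best] / (P[best, best] + R)   # condition P on observing y_best
        P -= np.outer(k, P[best, :])
        P = 0.5 * (P + P.T)                    # keep numerically symmetric
    return selected
```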

4 Trajectory Learning

Given a series of desired target locations for additional measurements, an appropriate motion trajectory must be chosen between each pair of locations. Rather than directly optimizing the trajectory based on the current state of the weather system, the system will learn to predict the best trajectory that minimizes the forecast error by examining past example trajectories. The advantage of this approach is that, once the predictive model is learned, each prediction can be made extremely quickly and adapted in real time as additional measurements are taken along the trajectory. The second advantage is that, by carefully selecting the learning technique, a large number of factors can be considered in both the weather system and the objective function, essentially optimizing against a number of different objectives, again without incurring a large computational penalty. The problem of learning a model that minimizes the predicted forecast error is that of reinforcement learning, in which an agent takes actions and receives some reward signal. The goal of the agent is to maximize over its lifetime the expected received reward (or minimize the received cost) by learning to associate actions that maximize reward in different states. Reinforcement learning algorithms allow the agent to learn a policy π : x → a, mapping state x to action a in order to maximize the reward. In the weather domain, our cost function is the norm of the forecast error at the verification state variables (X[v]) at the verification time t_v, so that the optimal policy π* is





\[
\pi^*(X) = \operatorname*{argmin}_{\pi \in \Pi}\; E_{X_{t_v}[v]}\Big[\, \big\| \tilde{X}_{t_v}[v]\,\big(h(\pi), X\big) - X_{t_v}[v] \big\| \,\Big]
\tag{10}
\]

(11)

Once this functional relationship is established, the controller simply examines the predicted error ξ for each action a given the current state estimate and chooses the action with the least error.


With access to a weather simulator such as the Lorenz model, we can simplify this learning problem by turning our reinforcement learning problem into a "supervised" learning problem, where the goal of the learner is not to predict the forecast error ξ of each possible trajectory conditioned on the current weather estimate, but rather to predict the best trajectory a, converting the regression problem of equation (11) into a classification problem, that is,

\[
(\tilde{X}, \Sigma) \to A
\tag{12}
\]

Although regression and classification are closely linked (and one can often be written in terms of the other), we can take advantage of some well-understood classification algorithms for computing policies. The classification algorithm used is the multi-class Support Vector Machine [7], which assigns a label (i.e., our optimal action) to each initial condition. The SVM is a good choice for learning our policy for two reasons: firstly, the SVM allows us to learn a classifier over the continuous state space (X̃, Σ). Secondly, the SVM is generally an efficient learner of large input spaces with a small number of samples; the SVM uses a technique known as the "kernel trick" [7] to perform classification by projecting each instance into a high-dimensional, non-linear space in which the inputs are linearly separable according to their class label.

4.1 Experimental Results

Training data for the Lorenz model was created by randomly generating initial conditions of the model, creating a set of ensemble members from random perturbations to the initial conditions, and then propagating the model. Figure 3(a) shows a plot of 40 initial conditions used as training data, created by running the model forward for several days and sampling a new initial condition every 6 hours, re-initializing the model every 5 days. Each row corresponds to a different training datum X_t, and each column corresponds to a state variable X^i. While the data are fairly random, the learner can take advantage of the considerable temporal correlation; notice the clear discontinuity in the middle of the data where the model was re-initialized.

Fig. 3. (a) A plot of 40 training instances X. Each column corresponds to a state variable X^i and each row is a different X_t. (b) Three example trajectories from our action space A. Our action space consisted of 5 trajectories that span the region from the same start and end locations.

Each training instance was labelled with the corresponding optimal trajectory. We restricted the learner to a limited set of 5 actions or candidate trajectories, although


this constraint will be relaxed in future work. All candidate trajectories started from the same mid-point of the left side of the region and ended at the same mid-point of the right side of the region. Three examples of the five trajectories are shown in Figure 3(b); the trajectories were chosen to be maximally distinct through the area of sparse routine observations in the centre of the region. From the initial condition, the model and ensemble were propagated for each trajectory. During this propagation, routine observations were taken every 5 time units, and then a forecast was generated by propagating the ensemble for a time equivalent to 2 and 4 days, without taking additional observations. The forecast error was then calculated as the difference between the ensemble estimate and the true value of the variables of the verification region. Each initial condition was labelled with the trajectory that minimized the resultant forecast error.
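A hedged sketch of the supervised policy-learning step follows: each training instance's ensemble mean is used as the feature vector and the index of its best trajectory as the class label, following the classification formulation of Eq. (12). scikit-learn's SVC stands in for the multi-class SVM, and the kernel grid and the mean-only feature choice are assumptions consistent with, but not identical to, the authors' setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def learn_trajectory_policy(ensemble_means, best_traj_labels):
    """Learn the mapping (ensemble mean) -> best trajectory index.
    SVC is multi-class out of the box; kernel grid and cv are assumptions."""
    grid = {"kernel": ["poly", "rbf"], "C": [1, 10, 100], "degree": [2, 3]}
    search = GridSearchCV(SVC(), grid, cv=3)
    search.fit(np.asarray(ensemble_means), np.asarray(best_traj_labels))
    return search.best_estimator_

# policy = learn_trajectory_policy(means, labels)
# best_action = policy.predict(current_mean.reshape(1, -1))[0]
```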

[Figure 4 plots the worst−best, median−best, and svm−best loss curves.]

Fig. 4. (a) Forecast error at the verification region after 2 days for the 200 largest losses in test data, sorted from least to greatest. (b) Forecast error loss, where the loss is taken with respect to the forecast error of the best trajectory.

Figure 4(a) shows the forecast error of the best, median, worst, and SVM trajectories for the 200 most difficult (highest forecast error) initial conditions in terms of the forecast error in the verification region. Notice that the forecast error of the SVM trajectory tracks the best trajectory relatively closely, indicating good performance. Figure 4(b) is an explicit comparison of the worst, median, and SVM trajectories against the best trajectory for the same 200 most difficult training instances. Again, the SVM has relatively little loss (as measured by the difference between the forecast error of the SVM and the forecast error of the best trajectory) for many of these difficult cases. In training the learner, two different kernels (non-linear projections in the SVM) were tested, specifically polynomial and radial basis function (RBF) kernels. Using cross-validation and a well-studied search method to identify the best kernel fit and size, a surprising result was that a low-order polynomial kernel resulted in the most accurate prediction of good trajectories. A second surprising result is that, in testing different combinations of input data, such as the filter mean alone compared to the filter mean and filter covariance, the filter covariance had relatively little effect on the SVM performance. This effect may be related to the restricted action class, but further investigation is warranted.


5 Conclusion

The spatio-temporal character of the data and the chaotic behavior of the weather model make the adaptive observation problem challenging in the weather domain. We have described two adaptive observation techniques, including a targeting algorithm and a learning path planning algorithm. In the future, we plan to extend these results using the Navy's Coupled Ocean Atmosphere Prediction System (COAMPS), a full-scale regional weather research and forecasting model.

References
1. Lorenz, E.N., Emanuel, K.A.: Optimal sites for supplementary weather observations: Simulation with a small model. Journal of the Atmospheric Sciences 55(3) (1998) 399-414
2. Morss, R., Emanuel, K., Snyder, C.: Idealized adaptive observation strategies for improving numerical weather prediction. Journal of the Atmospheric Sciences 58(2) (2001)
3. Choi, H.L., How, J., Hansen, J.: Ensemble-based adaptive targeting of mobile sensor networks. In: Proc. of the American Control Conference (ACC). (To appear, 2007)
4. http://www.aoc.noaa.gov/article winterstorm.htm. Available online (last accessed June 2005)
5. Whitaker, J., Hamill, H.: Ensemble data assimilation without perturbed observations. Monthly Weather Review 130(7) (2002) 1913-1924
6. Evensen, G., van Leeuwen, P.: Assimilation of altimeter data for the Agulhas current using the ensemble Kalman filter with a quasigeostrophic model. Monthly Weather Review 123(1) (1996) 85-96
7. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines. Cambridge University Press, Cambridge, UK (2000)

Two Extensions of Data Assimilation by Field Alignment

Sai Ravela

Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology
[email protected]

Abstract. Classical formulations of data assimilation perform poorly when forecast locations of weather systems are displaced from their observations. They compensate for position errors by adjusting amplitudes, which can produce unacceptably "distorted" states. Motivated by cyclones, in earlier work we showed a new method for handling position and amplitude errors using a single variational objective. The solution can be used with either ensemble or deterministic methods. In this paper, we report extensions of this work in two directions. First, the methodology is extended to multivariate fields commonly used in models, making the method readily applicable in practice. Second, an application of the methodology to rainfall modeling is presented.

1 Introduction

Environmental data assimilation is the methodology for combining imperfect model predictions with uncertain data in a way that acknowledges their respective uncertainties. It plays a fundamental role in DDDAS. However, data assimilation can only work when the estimation process properly represents all sources of error. The difficulties created by improperly represented error are particularly apparent in mesoscale meteorological phenomena such as thunderstorms, squall-lines, hurricanes, precipitation, and fronts. Errors in mesoscale models can arise in many ways, but they often manifest themselves as errors in position. We typically cannot attribute position error to a single or even a small number of sources, and it is likely that position errors are the aggregate result of errors in parameter values, initial conditions, boundary conditions and others. In the context of cyclones, operational forecasters resort to ad hoc procedures such as bogussing [4]. A more sophisticated alternative is to use data assimilation methods. Unfortunately, the sequential [10], ensemble-based [9] and variational [12,3] state estimation methods used in data assimilation applications adjust amplitudes to deal with position error. Adjusting amplitudes does not really fix position error and can instead produce unacceptably distorted estimates. In earlier work [16], we showed how the values predicted at model grid points can be adjusted in amplitude and moved in space in order to achieve a better fit to observations. The solution is general and applies equally well to meteorological, oceanographic, and hydrological applications.

This material is supported by NSF CNS 0540259.



The method involves solving two equations, in sequence, described as follows. Let $X = X(r) = \{X[r_1^T] \ldots X[r_m^T]\}$ be the model-state vector defined over a spatially discretized computational grid $\Omega$, and $r^T = \{r_i = (x_i, y_i)^T, i \in \Omega\}$ be the position indices. Similarly, let $q$ be a vector of displacements, that is, $q^T = \{q_i = (\Delta x_i, \Delta y_i)^T, i \in \Omega\}$, so that $X(r - q)$ represents the displacement of $X$ by $q$. The displacement field $q$ is real-valued, so $X(r - q)$ must be evaluated by interpolation where necessary. We wish to find $(X, q)$ with maximum a posteriori probability in the distribution $P(X, q|Y)$, where $Y$ is the observation vector. Using Bayes rule we obtain $P(X, q|Y) \propto P(Y|X, q) P(X|q) P(q)$. Assuming a linear observation model with noise uncorrelated in space and time, Gaussian component densities, and a smooth, non-divergent displacement-field solution, the following Euler-Lagrange equations are obtained and solved sequentially.

1) Alignment. Define $p = r - q$; the alignment equation is then written at each grid node $i$ as

$$w_1 \nabla^2 q_i + w_2 \nabla(\nabla \cdot q_i) + \left[\nabla X^T |_p \, H^T R^{-1} \left(H[X(p)] - Y\right)\right]_i = 0 \qquad (1)$$

Equation 1 introduces a forcing based on the residual between the fields. The constraints on the displacement field allow the forcing to propagate to a consistent solution. Equation 1 is non-linear and is solved iteratively.

2) Amplitude adjustment. The aligned field $X(\hat{p})$ is used in the second step for a classical Kalman update:

$$\hat{X}(\hat{p}) = X(\hat{p}) + B_{\hat{Q}} H^T \left(H B_{\hat{Q}} H^T + R\right)^{-1} \left(Y - H X(\hat{p})\right) \qquad (2)$$

The covariance $B_{\hat{Q}}$ is computed after alignment. It can be estimated using ensembles (once each member has been aligned), or by any method that would otherwise be used on the field $X$.
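To illustrate the sequential solve, here is a minimal, single-scale sketch of the two steps on a 2D scalar field. It is not the paper's implementation (which is multiresolution and more careful); the step size, weights, and the pixel-mask observation operator are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import laplace, map_coordinates

def warp(X, qy, qx):
    """Evaluate X(r - q) on a regular grid by bilinear interpolation (p = r - q)."""
    ny, nx = X.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    return map_coordinates(X, [yy - qy, xx - qx], order=1, mode="nearest")

def align(X, Y, obs_mask, Rinv=1.0, w1=1.0, w2=0.5, step=0.25, n_iter=200):
    """Relaxation of eq. (1): the smoothness terms w1*Lap(q) + w2*grad(div q)
    balance the data forcing grad X|_p * R^-1 * (H X(p) - Y) at observed pixels."""
    qy = np.zeros_like(X, dtype=float)
    qx = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        Xp = warp(X, qy, qx)
        resid = obs_mask * Rinv * (Xp - Y)        # H X(p) - Y on observed pixels only
        gy, gx = np.gradient(Xp)                  # grad X evaluated at p
        div = np.gradient(qy, axis=0) + np.gradient(qx, axis=1)
        ddiv_dy, ddiv_dx = np.gradient(div)
        qy += step * (w1 * laplace(qy) + w2 * ddiv_dy + gy * resid)
        qx += step * (w1 * laplace(qx) + w2 * ddiv_dx + gx * resid)
    return qy, qx

def amplitude_update(Xp_vec, Y_vec, H, B, R):
    """Eq. (2): classical Kalman update of the (flattened) aligned field,
    with B estimated, e.g., from an ensemble of aligned members."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return Xp_vec + K @ (Y_vec - H @ Xp_vec)
```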

2 Multivariate Formulation

To be applicable in practice, the field alignment algorithm must be extended to the multivariate case, including vector fields. As an example, consider 2D fields (3D fields can be handled analogously), say two components of velocity and pressure. We partition the state $X$ and the observations $Y$ into component fields $P$, $U$, and $V$ for pressure and the two velocity components. To align $X$ to $Y$, we constrain the individual fields to have the same displacements. Displacing vector fields, however, involves the Jacobian of the deformation (when the wind field rotates, both the coordinates and the wind-field components rotate). Therefore, if $\psi$ is a scalar function undergoing a deformation $\psi(r - q)$, then its gradient vector field undergoes a transformation expressed as $(\nabla q)^T \nabla \psi|_{r-q}$. We therefore introduce, for the wind velocity components, the variables $\tilde{U}, \tilde{V}$, defined as

$$\begin{pmatrix} \tilde{U}_i \\ \tilde{V}_i \end{pmatrix} = (\nabla q_i)^T \begin{pmatrix} U_i \\ V_i \end{pmatrix} \qquad (3)$$
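A small sketch of equation (3), applying a finite-difference Jacobian of the displacement field to the wind components at every grid node; the ordering of the components of q and the orientation convention for the Jacobian follow the paper's definition and are assumptions here.

```python
import numpy as np

def transform_wind(U, V, qx, qy):
    """Form (U~, V~)^T = (grad q)^T (U, V)^T at each node, as in eq. (3),
    with q = (qx, qy) and grad q approximated by centered finite differences."""
    dqx_dy, dqx_dx = np.gradient(qx)   # np.gradient returns d/dy, d/dx
    dqy_dy, dqy_dx = np.gradient(qy)
    U_t = dqx_dx * U + dqy_dx * V      # first row of (grad q)^T applied to (U, V)
    V_t = dqx_dy * U + dqy_dy * V      # second row of (grad q)^T applied to (U, V)
    return U_t, V_t
```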


The field alignment equation now reads

$$w_1 \nabla^2 q_i + w_2 \nabla(\nabla \cdot q_i) + \left[\nabla P^T|_{r-q} \, H_P^T R_P^{-1} \left(H_P P(r-q) - Y_P\right)\right]_i + \left[\nabla \tilde{U}^T|_{r-q} \, H_U^T R_U^{-1} \left(H_U \tilde{U}(r-q) - Y_U\right)\right]_i + \left[\nabla \tilde{V}^T|_{r-q} \, H_V^T R_V^{-1} \left(H_V \tilde{V}(r-q) - Y_V\right)\right]_i = 0$$

Here $R_P$, $R_U$, $R_V$ are the observation noise covariances, $H_P$, $H_U$, $H_V$ are the observation operators, and $Y_P$, $Y_U$, $Y_V$ are the component fields of the observation vector. We demonstrate the use of the two-step method on multivariate fields next.

Example: An example is now demonstrated using pressure and velocity. Figure 1 contains "filled" iso-contour plots of state fields. The first-guess fields, shown down the first column, are rotated by 40° from truth (shown in the second column). Measurements are made at every 5th pixel of truth, marked by white dots or black crosses in the third column. There is an amplitude error of a multiplicative factor of 1.1, and the observational noise is i.i.d., 30% of peak amplitude in pressure and 7% in the velocity components. The background error covariance (modeled isotropic; see [16] for flow-dependent results) in this example is substantially more uncertain than the observational uncertainty. The fourth column of Figure 1 depicts the 3DVAR-only analysis for pressure and the velocity components. The pressure and velocity fields are clearly "smeared". In contrast, the rightmost column of Figure 1 depicts a much better analysis when field alignment is used as a preprocessing step to 3DVAR. Note that the first guess and observations fed to 3DVAR-only and to the field alignment algorithm are the same.

In Figure 2, using the nonlinear balance equation, we compare the analyzed and diagnosed pressure fields to study how well alignment preserves balance. Result: alignment followed by 3DVAR preserves balance to a far greater degree than 3DVAR alone; see Figure 2.

Figure 3 compares analysis and balance errors for various cases of sparsity. The x-axis of the left panel depicts sparsity; 1 implies every location was observed, 10 implies every 10th location was observed. The y-axis represents error normalized by the maximum pressure differential. The left panel's bar charts compare the analyzed pressure against truth, using open bars for 3DVAR and closed ones for field alignment followed by 3DVAR. We can see that as the observations get more sparse, 3DVAR performance degrades more sharply than the field-alignment-preprocessed version. The right panel of Figure 3 compares the analyzed and diagnosed pressure, with the x-axis representing sparsity and the y-axis normalized error. The differences rise sharply for 3DVAR-only and, in fact, beyond a sparsity of about 6 pixels, 3DVAR-only breaks down. The analysis does not compensate for position error, and this is clearly seen in the analysis errors shown in the right panel corresponding to this case. Therefore, although the diagnosed and analyzed pressure fields in 3DVAR-only find themselves in good agreement, they are quite far from the truth! In contrast, compensating position error using field alignment yields analysis and balance errors that are much smaller. They do grow, but much more slowly as a function of observational sparsity.


Fig. 1. The left column of panels is the first guess, the second column truth, the third shows observations taken at the indicated locations, the fourth shows the 3DVAR-only analysis, and the rightmost shows field alignment followed by 3DVAR with identical truth, first guess and observations. The top row corresponds to pressure, the second row to the U component of velocity, and the third row to the V component.

[Figure 2 panels: 3DVAR Analyzed Pressure, 3DVAR Diagnosed Pressure, FA+3DVAR Analyzed Pressure, FA+3DVAR Diagnosed Pressure.]

Fig. 2. This figure depicts a comparison of balance between 3DVAR and our approach. The analyzed and diagnosed 3DVAR pressures (top two panels) are substantially different than the corresponding pressure fields using 3DVAR after alignment.

3 Application to Velocimetry

In contrast to hurricanes, we now apply the methodology to rainfall modeling. Rainfall models broadly fall into two categories. The first is the meteorological, or quantitative precipitation forecasting, model, such as the Mesoscale Model (MM5) [2], the step-mountain Eta coordinate model [1], and the Regional Atmospheric Modeling System (RAMS) [5]. The second type is the spatiotemporal stochastic rainfall model, which aims to summarize the spatial and temporal characteristics of rainfall by a small set of parameters [6].

[Figure 3: two bar-chart panels, "Analyzed Pressure vs. Truth" (left) and "Analyzed vs. Diagnosed Pressure" (right); the y-axis is normalized error and the x-axis is sparsity (1 to 10).]

Fig. 3. The x-axis of these graphs represents sparsity. The y-axis of the left panel shows the normalized error between the analyzed pressure and truth, and the right panel shows the normalized error between analyzed and diagnosed pressure. The filled bars depict the 3DVAR-only case, and the open bars are for field alignment followed by 3DVAR.

Fig. 4. CIMSS Winds derived from GOES data at 2006-04-06-09Z (left) and pressure (right). The velocity vectors are sparse and contain significant divergence.

This type of model usually simulates the birth and decay of rain cells and evolves them through space and time using simple physical descriptions. Despite significant differences among these rainfall models, the approach to propagating rainfall through space and time is relatively similar. The major ingredient required to advect rainfall is a velocity field. Large spatial-scale (synoptic) winds are inappropriate for this purpose for a variety of reasons. Ironically, synoptic observations can be too sparse to be used directly, and although the synoptic-scale wind analyses produced from them (and from models) do provide dense spatial estimates, such estimates often do not contain variability at the mesoscales of interest. The motion of mesoscale convective activity is a natural source for velocimetry. Indeed, there exist products that deduce "winds" by estimating the motion of temperature, vapor and other fields evolving in time [7,8].


Fig. 5. Deriving velocimetry information from satellite observations, Nexrad (top), GOES (bottom). See text for more information.


In this paper, we present an algorithm for velocimetry from observed motion in satellite observations such as GOES, AMSU, and TRMM, or in radar data such as NOWRAD. The velocity is obtained by direct application of equation 1 to two time-separated images. This approach provides a marked improvement over other methods in conventional use. In contrast to the correlation-based approaches used for deriving velocity from GOES imagery, the displacement fields are dense, quality control is implicit, and higher-order and small-scale deformations can easily be handled. In contrast with optic-flow algorithms [13,11], we can produce solutions at large separations of mesoscale features, at large time steps, or where the deformation is rapidly evolving. A detailed discussion is presented in [15].

Example: The performance of this algorithm is illustrated in a velocimetry computation. For comparison, we use CIMSS satellite wind data [8], depicted in Figure 4, obtained from a CIMSS analysis on 2006-06-04 at 09Z. The CIMSS winds are shown over the US Great Plains and were obtained from the sounder. The red dots indicate the original locations of the data. The left subplot shows wind speed (in degrees/hr). The right one shows pressure, with the locations of the raw measurements in red. It can be seen in the map in Figure 4 that the operational method for producing winds generates sparse vectors and, further, exhibits substantial divergence. Considering the length scales, this is not turbulence; the wind vectors are more likely the result of weak quality control. A more detailed discussion is presented in [15].

In contrast, our method produces dense flow fields, and quality control is implicit in the regularization constraints. Figure 5(a,b) shows a pair of NOWRAD images at 2006-06-01-0800Z and 2006-06-01-0900Z respectively, and the computed flow field in Figure 5(c). Similarly, Figure 5(d,e,f) shows the GOES images and velocity from the same time frame over the deep convective rainfall region of the Great Plains example. The velocities are in good agreement with the CIMSS-derived winds as far as magnitudes are concerned, but the flow fields are smooth, and visual confirmation of the alignment provides convincing evidence that they are correct.
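As a usage sketch (building on the single-scale `align` routine sketched after equation (2); the pixel size, time separation, and dense observation mask are assumptions), velocimetry from an image pair reduces to estimating the displacement field and dividing by the time separation:

```python
import numpy as np

def velocity_from_images(img_t0, img_t1, dt_hours, pixel_size_km):
    """Estimate a dense velocity field from two time-separated images by aligning
    the earlier image to the later one (every pixel treated as observed)."""
    obs_mask = np.ones_like(img_t0, dtype=float)
    qy, qx = align(img_t0.astype(float), img_t1.astype(float), obs_mask)
    # A feature at r - q in the first image appears at r in the second,
    # so q is the displacement over dt (assumed uniform pixel size).
    vy = qy * pixel_size_km / dt_hours
    vx = qx * pixel_size_km / dt_hours
    return vy, vx
```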

4 Discussion and Conclusions

The joint position-amplitude assimilation approach is applicable to fields with coherent structures. Thus, problems in reservoir modeling, convection, rainfall modeling, and tracking the ocean and atmosphere will benefit. The solution to the assimilation objective can be computed efficiently in two steps: diffeomorphic alignment followed by amplitude adjustment. This solution allows ready use with existing methods, making it an attractive option for operational practice. The alignment formulation does not require features to be identified, a significant advantage when observations are sparse and features cannot be clearly delineated. The alignment formulation can be extended easily to multivariate fields and can be used for a variety of velocimetry problems, including particle image velocimetry, velocity from tracer transport, and velocity from GOES and other satellite data. In relation to GOES, our approach implicitly provides quality control in terms of smoothness and produces dense displacement fields. To complete this body of work on position-amplitude estimation, we are conducting research in the following directions:


1. We recently demonstrated [14] that an ensemble filter can be developed when both observations and states have position and amplitude error. This situation occurs in the context of rainfall models, where both satellite-derived rain cells and model forecast cells contain position and amplitude error.
2. The position-amplitude smoother: We have developed an optimal fixed-interval and fixed-lag ensemble smoother [17]. Our results show that fixed-interval ensemble smoothing is linear in the interval and that fixed-lag smoothing is independent of the lag length. We are extending this smoother to the position-amplitude problem.
3. New constraints: The smoothness constraint has been observed to provide weak control in certain problems. In ongoing work, we have reformulated the alignment problem using a spectral constraint on the deformation field.

References
1. T. L. Black. The new NMC mesoscale Eta model: Description and forecast examples. Weather and Forecasting, 9(2):265–278, 1994.
2. F. Chen and J. Dudhia. Coupling an advanced land surface-hydrology model with the Penn State-NCAR MM5 modeling system. Part I: Model implementation and sensitivity. Monthly Weather Review, 129(4):569–585, 2001.
3. P. Courtier. Variational methods. J. Meteor. Soc. Japan, 75, 1997.
4. C. Davis and S. Low-Nam. The NCAR-AFWA tropical cyclone bogussing scheme. Technical Memorandum, Air Force Weather Agency (AFWA), Omaha, NE, [http://www.mmm.ucar.edu/mm5/mm5v3/tc-report.pdf], 2001.
5. A. Orlandi et al. Rainfall assimilation in RAMS by means of the Kuo parameterisation inversion: Method and preliminary results. Journal of Hydrology, 288(1-2):20–35, 2004.
6. C. Onof et al. Rainfall modelling using Poisson-cluster processes: A review of developments. Stochastic Environmental Research and Risk Assessment, 2000.
7. C. S. Velden et al. Upper-tropospheric winds derived from geostationary satellite water vapor observations. Bulletin of the American Meteorological Society, 78(2):173–195, 1997.
8. C. Velden et al. Recent innovations in deriving tropospheric winds from meteorological satellites. Bulletin of the American Meteorological Society, 86(2):205–223, 2005.
9. G. Evensen. The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dynamics, 53:342–367, 2003.
10. A. Gelb. Applied Optimal Estimation. MIT Press, 1974.
11. D. J. Heeger. Optical flow from spatiotemporal filters. International Journal of Computer Vision, pages 279–302, 1988.
12. A. C. Lorenc. Analysis methods for numerical weather prediction. Q. J. R. Meteorol. Soc., 112:1177–1194, 1986.
13. H.-H. Nagel. Displacement vectors derived from second order intensity variations in image sequences. Computer Vision, Graphics and Image Processing, 21:85–117, 1983.
14. S. Ravela and V. Chatdarong. How do we deal with position errors in observations and forecasts? In European Geophysical Union Annual Congress, 2006.
15. S. Ravela and V. Chatdarong. Rainfall advection using velocimetry by multiresolution viscous alignment. Technical report, arXiv, physics/0604158, April 2006.
16. S. Ravela, K. Emanuel, and D. McLaughlin. Data assimilation by field alignment. Physica D (Article in Press), doi:10.1016/j.physd.2006.09.035, 2006.
17. S. Ravela and D. McLaughlin. Fast ensemble smoothing. Ocean Dynamics (Article in Press), DOI 10.1007/s10236-006-0098-6, 2007.

A Realtime Observatory for Laboratory Simulation of Planetary Circulation

S. Ravela, J. Marshall, C. Hill, A. Wong, and S. Stransky

Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology
[email protected]

Abstract. We present a physical, laboratory-scale analog of large-scale atmospheric circulation and develop an observatory for it. By combining observations of a hydrodynamically unstable flow with a 3D numerical fluid model, we obtain a real-time estimate of the state of the evolving fluid that is better than either model or observations alone. To the best of our knowledge this is the first such observatory for laboratory simulations of planetary flows that functions in real time. New algorithms in modeling, parameter and state estimation, and observation targeting can be rapidly validated, making weather and climate applications accessible to computational scientists. Properties of the fluid that cannot be directly observed can be effectively studied by a constrained model, thus facilitating scientific inquiry.

1 Introduction

Predicting planetary circulation is fundamental for forecasting weather and for studies of climate change. Predictions are typically made using general circulation models (GCMs), which implement the discretized governing equations. It is well known that the prediction problem is hard [4]. Models typically have erroneous parameters and parameterizations, uncertain initial and boundary conditions, and approximate numerical schemes. Thus not only will the error between physical truth and simulation evolve in a complex manner, but the PDF of the evolving model state's uncertainty is unlikely to retain the true state within it. A way forward is to constrain the model with observations of the physical system. This leads to a variety of inference problems, such as estimating initial and boundary conditions and model parameters, to compensate for model inadequacies and inherent limits to predictability. Constraining models with observations on a planetary scale is a logistical nightmare for most researchers. Bringing the real world into an appropriate laboratory testbed allows one to perform repeatable experiments, and so to explore and accelerate the acceptance of new methods. A well-known analog of planetary fluid flow is a thermally-driven unstable rotating flow [2,3]. In this experiment a rotating annulus with a cold center (core) and warm periphery (exterior) develops a circulation that is dynamically similar to the mid-latitude circulation in the atmosphere (see Figure 1). We have built an observatory for this laboratory experiment with the following components: sensors that take measurements of the evolving physical system, a numerical model that forecasts the system, and an algorithm that combines model and observations.

This material is supported by NSF CNS 0540248.



Fig. 1. Image (a) shows the 500hPa heights for 11/27/06:1800Z over the northern hemisphere, centered at the north pole. Winds flow along the contours. Image (b) shows a tracer (dye) in a laboratory analog. The tank is spinning and the camera is in the rotating frame. Tracer droplets initially inserted at the periphery (red dye, warm region) and around the central chilled can (green dye, cold region) have evolved to form this pattern. The laboratory analog and the planetary system are dynamically akin to one another. We study the state-estimation problem for planetary flows using the laboratory analog.

The challenges in building such a system are rather similar to those of the large-scale problem in at least four ways. Nonlinearity: The laboratory analog is nonlinear, and the numerical model is the same as that used in planetary simulations. Dimensionality: The size of the state of the numerical model is of the same order as in planetary simulations. Uncertainty: The initial conditions are unknown, and the model is imperfect relative to the physical system. Realtime: Forecasts must be produced in better than realtime; this corresponds to a time of order ten seconds in our laboratory system within which a forecast-observe-estimate cycle must be completed. In this report, we discuss the realtime system and focus on the problem of estimating initial conditions, or state. This estimation problem is posed as one of filtering, and we demonstrate a two-stage assimilation scheme that allows realtime model-state estimates.

2 The Observatory

The observatory, illustrated in Figure 2, has a physical and a computational component. The physical component consists of a perspex annulus of inner radius 8 cm and outer radius 23 cm, filled with 15 cm of water and situated rigidly on a rotating table. A robotic arm by its side moves a mirror up and down to position a horizontal sheet of laser light at any depth of the fluid. Neutrally buoyant fluorescent particles are embedded in the water and respond to the incident laser illumination. They appear as a plane of textured dots in the 12-bit quantized, 1K × 1K images (see Figure 4) of an Imperx camera. These images are transported out of the rotating frame using a fiber-optic rotary joint (FORJ, or slip-ring). The actual configuration of these elements is shown in a photograph of our rig in Figure 3.


Fig. 2. The laboratory observatory consists of a physical system: a rotating table on which a tank, camera and control system for illumination are mounted. The computational part consists of a measurement system for velocimetry, a numerical model, and an assimilation system. Please see text for description.

Fig. 3. The apparatus consists of (a) the rotating platform, (b) the motorized mirror, (c) the tank, (d) electronics, and (e) a rig on which a camera (g) is mounted. Laser light comes from direction (f) and bounces off two mirrors before entering the tank. The fiber-optic rotary joint (FORJ) (h) allows images to leave the rotating frame and is held stably by bungee cords (i).

The computational aspects of the observatory are also shown in Figure 2. A server acquires particle images and ships them to two processors that compute optic flow in parallel (Figure 2, labeled OBS). Flow vectors are passed to an assimilation program (Figure 2, labeled DA) that combines them with forecasts to estimate new states. These estimates become new initial conditions for the models. We now go on to discuss the individual components of this system.

2.1 Physical Simulation and Visual Observation

We homogenize the fluid with neutrally buoyant particles and spin the rotating platform, typically with a period of six seconds. After twenty minutes or so the fluid entrains itself to the rotation and enters into solid-body rotation.


Fig. 4. The rotating annulus is illuminated by a laser light sheet shown on the left. The camera in the rotating frame sees embedded particles shown on the right. Notice the shadow due to the chiller in the middle. The square tank is used to prevent the laser from bending at the annulus interface.

The inner core is then cooled using a chiller. Within minutes the water near the core cools and becomes dense. It sinks to the bottom and is replenished by warm water from the periphery of the annulus, thus setting up a circulation. At high enough rotation rates eddies form; flowlines bend, forming all sorts of interesting structures much like the atmosphere; see Figure 1. Once cooling commences, we turn off the lights and turn on the continuous-wave 1W 532nm laser, which emits a horizontal sheet of light that doubles back through two periscoped mirrors to illuminate a sheet of the fluid volume (see Figure 4). An imaging system in the rotating frame observes the developing particle flow using a camera looking down at the annulus. The ultra-small pliolite particles move with the flow. We see the horizontal component and compute optical flow from image pairs acquired 125-250 ms apart using LaVision's DaVis software. Flow is computed in 32 × 32 windows with a 16-pixel uniform pitch across the image. It takes one second to acquire and compute the flow of a single 1K × 1K image pair. Observations are gathered over several levels, repeatedly: the mirror moves to a preprogrammed level, the system captures images, flow is computed, and the mirror moves to the next preprogrammed level, and so on, scanning the fluid volume in layers. We typically observe the fluid at five different layers, so observations of the whole fluid are available every 5 seconds and are used to constrain the numerical model of the laboratory experiment.

2.2 Numerical Model

We use the MIT General Circulation Model developed by Marshall et al. [6,5] to numerically simulate the circulation. The MIT-GCM is freely available software and can be configured for a variety of simulations of ocean or atmosphere dynamics. We use the MIT-GCM to solve the primitive equations for an incompressible Boussinesq fluid in hydrostatic balance. Density variations are assumed to arise from changes in temperature. The domain is three-dimensional and represented in cylindrical coordinates, as shown in Figure 6(a), the natural geometry for representing an annulus. In the experiments shown here, the domain is divided into 23 bins in radius (1 cm/bin) and 120 bins in orientation (3° bins). The vertical coordinate is discretized non-uniformly using 15 levels covering 15 cm of physical fluid height, as shown in Figure 6(b). The fluid is modeled as having a free-slip upper boundary and a linear implicit free surface.


Fig. 5. A snapshot of our interface showing model velocity vectors (yellow), and observed velocities (green) at some depth. The model vectors macroscopically resemble the observations, though the details are different, since the model began from a different initial condition and has other errors. Observations are used to constrain model states, see section 2.1.

The lateral and bottom boundaries are modeled as no-slip. The temperature at the outer core is constant, and at the inner core it is set to decrease with a profile interpolated from sparse measurements made in a separate experiment (see Figure 6(b)). The bottom boundary has a no-heat-flux condition. We launched the model from a random initial temperature field; a 2D slice is shown in Figure 6(c). The MIT-GCM discretizes variables on an Arakawa C-grid [1]. Momentum is advected using a second-order Adams-Bashforth technique. Temperature is advected using an upwind-biased direct space-time technique with a Sweby flux limiter [7]. The treatment of vertical transport and diffusion is implicit. The 2D elliptic equation for the surface pressure is solved using conjugate gradients. In Figure 5, the model velocities are overlaid on the observed velocities after suitably registering the model geometry to the physical tank and interpolating. Despite the obvious uncertainty in initial conditions and other approximations, the model preserves the gross character of the flow observed in the physical fluid, but at any instant the model state differs from the observations, as expected. The model performs in better than realtime. On an Altix 350, using one processor, we barely make it (in ongoing work with Leiserson et al. we seek further speedup using multicore processors), but on 4 processors we are faster by a factor of 1.5. The reason for this performance is the non-uniform discretization of the domain, with nonuniform vertical levels, which is nonetheless sufficient to resolve the flow.


Fig. 6. (a) The computational domain is represented in cylindrical coordinates. (b) Depth is discretized with variable resolution to resolve the bottom boundary finely. The lateral boundary conditions were obtained by interpolating sparse temperature measurements taken in a separate run, and the bottom boundary is no-flux. (c) A random initial condition field for a layer.

2.3 State Estimation

An imperfect model with uncertain initial conditions can be constrained through a variety of inference problems. In this paper we estimate initial conditions (state estimation), which is useful in weather prediction applications. Following well-known methodology, when the distributions in question are assumed Gaussian, the estimate of the state $X_t$ at time $t$ is the minimizer of the quadratic objective

$$J(X_t) = (X_t - X_t^f)^T B_t^{-1} (X_t - X_t^f) + (Y_t - h(X_t))^T R^{-1} (Y_t - h(X_t)) \qquad (1)$$

Here, $X_t^f$ is the forecast at time $t$, $B_t$ is the forecast error covariance, $h$ is the observation operator and $R$ is the observation error covariance. We use a two-stage approach. In the first stage, forecast and measured velocities at each of the five observed levels are separately assimilated to produce velocity estimates. Thermal wind is then used to map the implied temperature fields at these corresponding levels. Linear models estimated between model variables in the vertical are finally used to estimate velocity and temperature at all model layers. Velocities at individual layers are assimilated using an isotropic background error covariance. In effect, we assume that the model does not have sufficient skill at startup, and the first stage is therefore designed to nudge the model to develop a flow similar to the observations. Because the domain is decomposed into independent 2D assimilations and many independent 1D regressions, this stage is very fast. After running the first stage for a few iterations, the second stage is triggered. Here we use an ensemble of model states to represent the forecast uncertainty and thus use the ensemble Kalman filter to compute model-state estimates. It is impractical to run large ensemble simulations, so we use time samples of the MIT-GCM's state and perturb the snapshots azimuthally, with a mean perturbation of 0° and a standard deviation of 9°, to statistically represent the variability of model forecasts.
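For a linear observation operator H, the minimizer of this objective is the standard analysis update used in both stages; below is a minimal dense-matrix sketch (no localization or square-root tricks, which a realtime implementation would likely need).

```python
import numpy as np

def analysis_update(xf, B, H, R, y):
    """Minimize J(x) = (x - xf)^T B^-1 (x - xf) + (y - Hx)^T R^-1 (y - Hx) for linear H:
    x_a = xf + K (y - H xf), with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xf + K @ (y - H @ xf)
```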


Fig. 7. The forecast velocity field at a time t = 10min (left), observations at this layer (right) and estimated velocity (bottom). The shadow precludes measurements in a large area and produces noisy vectors at the shadow boundary.

In this way the ensemble captures the dominant modes with very few numerical simulations, and the method produces effective state estimates very efficiently. The model is spun up from a random initial condition and forecasts 10 seconds ahead in approximately 6 seconds. Ensemble members are produced by time sampling every model second and perturbing the samples to construct 40-member forecast ensembles. Planar velocity observations of 5 layers of the model are taken in parallel. In the current implementation, assimilation is performed off-line, though its performance is well within realtime: in under 2 seconds, the models and observations are fused to produce new estimates. The new estimated state becomes a new initial condition and the model launches new forecasts. In Figure 7 the planar velocity of a forecast simulation, the observations and the estimates at the middle of the tank are shown 10 minutes into an experiment. As can be seen, the observations are noisy and incomplete due to the shadow (see Figure 4). The estimate is consistent with the observations and fills in missing portions using the forecast. The error between the observations and the estimates is substantially reduced. Note that all 5 levels of observations are used and the entire state is estimated, though this is not depicted here for lack of space.
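A sketch of how such a forecast ensemble could be assembled from time snapshots of a single run by random azimuthal shifts, as described in Section 2.3 (the 3° azimuthal bins and the 9° standard deviation come from the text; the state array layout, with azimuth as the second axis, is an assumption):

```python
import numpy as np

def azimuthal_ensemble(snapshots, n_members=40, deg_per_bin=3.0, sigma_deg=9.0, seed=0):
    """Draw time snapshots of the model state (shape [n_snap, n_r, n_theta, ...]) and
    roll each along the azimuthal axis by a random angle ~ N(0, sigma_deg),
    quantized to whole grid bins."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        snap = snapshots[rng.integers(len(snapshots))]
        shift_bins = int(round(rng.normal(0.0, sigma_deg) / deg_per_bin))
        members.append(np.roll(snap, shift_bins, axis=1))  # axis 1 = azimuth
    return np.stack(members)
```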


3 Conclusions

The laboratory analog of the mid-latitude circulation is a robust experiment, and the numerical model is freely available; the analog therefore serves as a new, easy-to-use testbed. We have built a realtime observatory that, to the best of our knowledge, has not been reported before. Our hope is that the datasets generated here will be useful to other researchers who wish to apply their own algorithms. Realtime performance is achieved through parallelism (observations), domain reduction (model), and an efficient method to generate samples and compute updates (estimation). A successful observatory also opens a number of exciting possibilities. Once the numerical model faithfully tracks the physical system, properties of the fluid that cannot easily be observed (surface height, pressure fields, vertical velocities, etc.) can be studied using the model. Tracer transport can be studied using numerical surrogates. Macroscopic properties such as effective diffusivity can be studied via the model. For weather prediction, the relative merits of different state estimation algorithms, characterizations of model error, strategies for where to observe, and so on can all be studied, and results reported on the laboratory system will be credible. Of particular interest is the role of the laboratory analog for DDDAS. Having built the infrastructure in the first year of this work, we can take on the DDDAS aspects of this research in the second. In particular, we are interested in using the model-state uncertainty to optimize the number of observed sites and the locations where state updates are computed. In this way we expect to steer the observation process and use the observations to steer the estimation process.

Acknowledgment

Jean-Michel Campin and Ryan Abernathy's help in this work is gratefully acknowledged.

References
1. A. Arakawa and V. Lamb. Computational design of the basic dynamical processes of the UCLA general circulation model. Methods in Computational Physics, Academic Press, 17:174–267, 1977.
2. R. Hide and P. J. Mason. Sloping convection in a rotating fluid. Advances in Physics, 24:47–100, 1975.
3. C. Lee. Basic Instability and Transition to Chaos in a Rapidly Rotating Annulus on a Beta-Plane. PhD thesis, University of California, Berkeley, 1993.
4. E. N. Lorenz. Deterministic nonperiodic flow. J. Atmos. Sci., 20:130–141, 1963.
5. J. Marshall, A. Adcroft, C. Hill, L. Perelman, and C. Heisey. A finite-volume, incompressible Navier-Stokes model for studies of the ocean on parallel computers. J. Geophysical Res., 102(C3):5753–5766, 1997.
6. J. Marshall, C. Hill, L. Perelman, and A. Adcroft. Hydrostatic, quasi-hydrostatic and nonhydrostatic ocean modeling. Journal of Geophysical Research, 102(C3):5733–5752, 1997.
7. P. K. Sweby. High resolution schemes using flux-limiters for hyperbolic conservation laws. SIAM Journal of Numerical Analysis, 21:995–1011, 1984.

Planet-in-a-Bottle: A Numerical Fluid-Laboratory System

Chris Hill1, Bradley C. Kuszmaul1, Charles E. Leiserson2, and John Marshall1

1 MIT Department of Earth and Planetary Sciences, Cambridge, MA 02139, USA
2 MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA

Abstract. Humanity's understanding of the Earth's weather and climate depends critically on accurate forecasting and state-estimation technology. It is not clear, however, how to build an effective dynamic data-driven application system (DDDAS) in which computer models of the planet and observations of actual conditions interact. We are designing and building a laboratory-scale dynamic data-driven application system, called Planet-in-a-Bottle, as a practical and inexpensive step toward a planet-scale DDDAS for weather forecasting and climate modeling. The Planet-in-a-Bottle DDDAS consists of two interacting parts: a fluid lab experiment and a numerical simulator. The system employs data assimilation, in which actual observations are fed into the simulator to keep the models on track with reality, and sensitivity-driven observations, in which the simulator targets the real-time deployment of sensors to particular geographical regions and times for maximal effect and refines the mesh to better predict the future course of the fluid experiment. In addition, the feedback loop between the targeting of the observational system and mesh refinement can be mediated, if desired, by human control.

1 Introduction

The forecasting and state-estimation systems now in place for understanding the Earth's weather and climate consist of myriad sensors, both in-situ and remote, observing the oceans and atmosphere. Numerical fluid simulations, employing thousands of processors, are devoted to modeling the planet. Separately, neither the observations nor the computer models accurately provide a complete picture of reality. The observations measure only in a few places, and the computer simulations diverge over time if not constrained by observations. To provide a better picture of reality, application systems have been developed that incorporate data assimilation, in which observations of actual conditions in the fluid are fed into computer models to keep the models on track with the true state of the fluid. Using the numerical simulation to target the real-time deployment of sensors to particular geographical regions and times for maximal effect, however, is still only a research area and is not yet operational.

This research was supported by NSF Grant 0540248.



Under the NSF DDDAS program, we are designing and building a laboratory-analogue dynamic data-driven application system (DDDAS) that includes both data assimilation and sensitivity-driven observations to explore how one might proceed on the planetary scale. In meteorology, attempts to target adaptive observations to "sensitive" parts of the atmosphere are already an area of active research [11]. Observation-targeting field experiments, such as the Fronts and Atlantic Storm-Track EXperiment (FASTEX) and the NORth Pacific EXperiment (NORPEX), have demonstrated that by using, for example, objective adjoint techniques, it is possible to identify in advance regions of the atmosphere where forecast-error growth in numerical forecast models is maximally sensitive to error in the initial conditions [3]. The analysis sensitivity field is then used to identify promising targets for the deployment of additional observations for numerical weather prediction. Such endeavors, although valuable, are enormously expensive and hence rare, and the experimental results are often not repeatable.

In oceanography, the consortium for Estimating the Circulation and Climate of the Ocean (ECCO [13]) is an operational state-estimation system focusing on the ocean. This consortium is spearheaded by some of the authors of the present project but also involves scientists at the Jet Propulsion Laboratory (JPL), the Scripps Institute of Oceanography (SIO), as well as other MIT researchers. Ours has been a broad-based attack: we have coded new models of the atmosphere and ocean from the start with data assimilation in mind [9,10,8,1]. Both forward and adjoint models (maintained through automatic differentiation [7]) have been developed. We have also collaborated with computer scientists in the targeting of parallel computers [12] and in software engineering [5].

Unfortunately, deploying a DDDAS for ECCO or weather forecasting is currently unrealistic. Scientific methodology is uncertain, the cost would be substantial, and the technical hurdles to scaling to a global, high-resolution system, especially in the area of software, are immense. We therefore are designing and building a laboratory-scale DDDAS, which we call Planet-in-a-Bottle (or Bottle, for short), as a practical and inexpensive step towards a planet-scale DDDAS. The Bottle DDDAS will emulate many of the large-scale challenges of meteorological and oceanographic state-estimation and forecasting but provide a controlled setting in which systematic engineering strategies can be employed to devise more efficient and accurate techniques. The DDDAS will consist of two interacting parts: a fluid lab experiment and a numerical simulator. Observations taken from the laboratory experiment will feed into the simulator, allowing the simulator to refine the irregular mesh underlying the simulation and better predict the future course of the fluid experiment. Conversely, results from the numerical simulation will feed back to the physical system to target observations of the fluid, achieving a two-way interplay between computational model and observations. In addition, the feedback loop between the targeting of the observational system and mesh refinement will be mediated, if desired, by human control.



Fig. 1. The Planet-in-a-Bottle laboratory setup. The top row shows, on the left, a schematic of the tank-laser assembly, in the middle, the apparatus itself and, on the right, the laser sheet illuminating a horizontal plane in the fluid for the purpose of particle tracking via PIV. Note that the whole apparatus is mounted on a large rotating table. The bottom row shows, on the left, dye streaks due to circulation in the laboratory tank with superimposed velocity vectors from PIV, in the middle, eddies and swirls set up in the tank due to differential heating and rotation and, on the right, a snapshot of the temperature of the tropopause in the atmosphere (at a height of roughly 10km) showing weather systems. The fundamental fluid mechanics causing eddies and swirls in the laboratory tank in the middle is the same as that causing the weather patterns on the right.

The Planet-in-a-Bottle DDDAS will provide an understanding of how to build a planet-scale DDDAS. By investigating the issues of building a climate-modeling and weather-forecasting DDDAS in a laboratory setting, we will be able to make progress at much lower cost, in a controlled environment, and in an environment that is accessible to students and other researchers. The MITgcm [10,9,6,2] software that we use is exactly the same software used in planet-scale initiatives such as ECCO, and so our research promises to be directly applicable to planet-scale DDDASs of the future. MITgcm is a CFD engine, built by investigators from this project and designed from the start with parallel computation in mind, which in particular runs on low-cost Linux clusters of processors. When this project started, both the Bottle lab and the Bottle simulator already existed (see Figure 1), but they had not been coupled into a DDDAS. In particular, the simulator was far too slow (it ran at about 100 times real time) and lacked sufficient accuracy. Data assimilation of observations occurred off-line after the fluid experiment had been run, and the feedback loop that would allow the simulator to target observations in the fluid was not yet available. Building the Planet-in-a-Bottle DDDAS required substantial research, which we broke into two thrusts: dynamic data-driven science for fluid modeling, and algorithms and performance.


The first research thrust involves developing the dynamic data-driven science for fluid modeling in the context of MITgcm running in the Bottle environment. We are investigating adjoint models, which are a general and efficient representation of model sensitivity to any and all of the parameters defining a model. We are studying how adjoint models can be used to target observations so as to yield key properties of the fluid that cannot be directly measured, but only inferred by the synthesis of model and data. We are also studying their use in guiding the refinement of irregular meshes to obtain maximal detail in the Bottle simulation. We estimate that the computational demands of a practical implementation will be 5-10 times those of an ordinary forward calculation. The implementation also challenges the software-engineering infrastructure we have built around MITgcm, which today is based on MPI message passing and data-parallel computations.

The second thrust of our research focuses on algorithmic and performance issues. The previous Bottle simulation was far too slow to be usable in a DDDAS environment. We believe that the performance of the simulation can be improved substantially by basing the simulation on adaptive, irregular meshes, rather than the static, regular meshes now in use, because many fewer mesh points are required for a given solution. Unfortunately, the overheads of irregular structures can negate their advantages if they are not implemented efficiently. For example, a naive implementation of irregular meshes may not use the memory hierarchy effectively, and a poor partitioning of an irregular mesh may lead to poor load balancing or high communication costs. We are investigating and applying the algorithmic technology of decomposition trees to provide provably good memory layouts and partitions of the irregular meshes that arise from the fluid simulation.
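As a toy illustration of adjoint-based observation targeting (not the MITgcm adjoint; here the forward model is a simple linear map, so its adjoint is just the matrix transpose), one can propagate the gradient of a verification-region cost backwards and select the most sensitive initial-condition locations for extra observations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Toy linear forward model x_T = M x_0 (stand-in for a tangent-linear model).
M = np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)

# Outcome of interest: J = sum of the final state over a verification region.
dJ_dxT = np.zeros(n)
dJ_dxT[40:50] = 1.0

# Adjoint sensitivity: dJ/dx_0 = M^T dJ/dx_T for this linear toy model.
sensitivity = M.T @ dJ_dxT

# Target the k initial-condition locations with the largest |dJ/dx_0|.
k = 5
targets = np.argsort(np.abs(sensitivity))[-k:]
print("observe at state indices:", sorted(targets.tolist()))
```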

2 The Laboratory Abstraction: Planet-in-a-Bottle

The Bottle laboratory consists of the classic annulus experiment [4]: a rotating tank of water across which a lateral temperature gradient is maintained, cold in the middle (representing the earth's pole) and warm on the outside (representing the equator). The apparatus is shown in Figure 1. Differential heating of the rotating fluid induces eddies and geostrophic turbulence which transfer heat radially from "equator to pole", hence the "Planet-in-a-Bottle" analogy. This class of experiment has been a cornerstone of geophysical fluid dynamics and serves as a paradigm for the fluid dynamics and hydrodynamical instability processes that underlie our weather and for key processes that maintain the pole-equator temperature gradient of the planet. We use a video camera to capture the flow in a chosen horizontal plane by introducing neutrally buoyant pliolite particles into the fluid and illuminating them with a laser sheet. The top right panel of Figure 1 shows the laser illuminating pliolite particles at a particular depth. The captured images are recorded at a rate of 30 frames per second with a resolution of 1024×780 pixels.


The video images are fed to an image-processing system, which calculates the velocity field of the particles using PIV (particle imaging velocimetry). In addition, an array of sensors in the fluid records the temperature. The resulting data is fed into the Bottle simulator.

The Bottle simulator consists of a computer system running an ensemble of 30 simulation kernels, each of which runs the MITgcm, a computational fluid dynamics (CFD) code that we have developed. As illustrated in Figure 2, the simulator process divides time into a series of epochs, each with a duration about the same as one rotation of the laboratory experiment (representing one day, or in the laboratory, typically on the order of 10 seconds). At the beginning of each epoch, the simulator initializes each of the 30 kernels with an ensemble-Kalman-filter-derived estimate of the current state of the fluid, based on the observed fluid state and the simulation state from the preceding epoch. The simulator perturbs the initial state of each kernel in a slightly different manner. The 30 kernels then run forward until the end of the epoch, at which point the results are combined with the next frame of observations to initialize the next epoch.

The DDDAS we are building will enhance the current data assimilation with real-time, sensitivity-driven observation. The assimilation cycle will be preceded by a forecast for the upcoming epoch. At the end of the forecast, using the adjoint of the forward model, a sensitivity analysis will determine which locations in the fluid most affect the outcomes in which we are interested. The simulator will then direct the motor controlling the laser sheet to those particular heights in the fluid and direct the camera to zoom in on the particular regions. A dynamically driven set of observations will then be made by the camera, the image processor will calculate the velocity field of the particles, and the simulator will compute a new estimate of the fluid state. In addition, the Planet-in-a-Bottle system will dynamically refine the mesh used by the Kalman filter simulation kernels so that more computational effort is spent in those regions to which the outcome is most sensitive. This forecast and assimilation process will be repeated at each epoch. This is a direct analogue of the kind of large-scale planetary estimates described in, for example, [13].

Everything about the laboratory system (the assimilation algorithms, the simulation codes, the observation control system, the real-time constraints, and the scalable but finite compute resources) is practically identical to the technologies in operational use in oceanography and meteorology today. Moreover, and this is at the heart of the present project, it provides an ideal opportunity to investigate real-time data-driven applications because we can readily (i) use the adjoint of our forward model to target observations of an initial state to optimise the outcome, (ii) refine the numerical grid in regions of particular interest or adjoint sensitivity, and (iii) experiment with intervention by putting humans in the loop, directly controlling the observations and/or the grid refinement process. Although the physical scale of the laboratory experiment is small, it requires us to bring up the end-to-end combined physical-computational system and consider many issues encountered in the operational NWP and large-scale state-estimation communities.
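The epoch cycle just described can be summarized with a self-contained toy: a stand-in scalar-state model, a simple observation operator, and a perturbed-observation ensemble Kalman update replace the MITgcm, the PIV measurements, and the 30 parallel kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model_step(x):
    """Stand-in for one epoch (one rotation) of the fluid model."""
    return x + 0.05 * np.sin(x)

def enkf_analysis(ensemble, y, H, R):
    """Perturbed-observation EnKF update using the sample covariance of the forecast ensemble."""
    E = np.asarray(ensemble)                      # shape (members, state)
    A = E - E.mean(axis=0)
    B = A.T @ A / (len(E) - 1)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return [x + K @ (y + rng.multivariate_normal(np.zeros(len(R)), R) - H @ x) for x in E]

n_state, n_members = 50, 30
truth = rng.normal(size=n_state)
H = np.eye(n_state)[::5]                          # observe every 5th state element
R = 0.01 * np.eye(H.shape[0])
ensemble = [truth + 0.1 * rng.normal(size=n_state) for _ in range(n_members)]

for epoch in range(3):                            # forecast-observe-estimate cycle
    truth = toy_model_step(truth)
    ensemble = [toy_model_step(x) for x in ensemble]              # forecast each kernel
    y = H @ truth + rng.multivariate_normal(np.zeros(len(R)), R)  # next frame of observations
    ensemble = enkf_analysis(ensemble, y, H, R)                   # analysis seeds next epoch
```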


Fig. 2. Schematic of the ensemble Kalman filter. The interval between Kalman filter updates is one rotation period of the fluid experiment (typically ≈ 10s). The physical system (on the left) is observed with noisy and incomplete observations. An ensemble (on the right) of simulation kernels is run, each from a perturbed initial state, producing simulations with an error estimate. A filter combines the observations with the ensembles to yield a new updated state for the next epoch.

Notably, the MITgcm software we currently use for numerical simulation of the laboratory fluid is also in use in planetary-scale initiatives. Consequently, innovations in computational technologies tested in the laboratory will map directly to large-scale, real-world problems.

3 Progress

Our team has made significant progress on our Bottle DDDAS in the first year. We have finished developing about 70% of the physical infrastructure and 80% of the computational infrastructure, and have developed a first version of the end-to-end system demonstrating real-time simulation, measurement and estimation processes. We have developed robust protocols for physical simulation and instrumented the system to acquire and process data at high bandwidth. Currently the system can process about 60 MBytes/sec of raw observational and model data to produce state estimates in real time. The system includes a camera and a fiber-optic rotary joint, a laser light-sheet apparatus, a robotic arm to servo the light sheet to illuminate a fluid plane, and a rotating fluid homogenized with neutrally buoyant particles.


A thermal control subsystem is presently being integrated to produce climatological forcing signals of temperature gradient (and hence heat flux). Observations are produced using a particle image velocimetry method, procedures for which were perfected in the first year. A distributed computing infrastructure has been developed for generating velocity measurements, and we now frequently use this subsystem for gathering observations. We have made refinements to the MIT-GCM using a nonuniform domain decomposition, coupled with a 3rd-order accurate advection scheme. The model functions in realtime on an Altix 350.

Our goal has been to develop a high-performance application that is also superbly interactive. The observation subsystem is designed to be easily reconfigurable, but the details of the distributed computation are entirely hidden from the user. Similarly, an easy-to-configure model interface is being developed to allow the user to dynamically reconfigure the model parameters whilst leaving the actual implementation of the computation out of the picture. Both observations and models perform in realtime, so the effect of parameter changes is observed in realtime, and this is the basis for user-perceived interactivity. Data assimilation is implemented using MATLAB, a language most researchers know. We are incorporating the StarP system, so that the implementation of distributed computation is hidden from the user. Students and researchers alike can use their code, almost verbatim, on the large-scale problem and change algorithms rapidly. We believe that this architecture strikes the right balance between performance and interaction, and we hope to see its benefits as we incorporate it into research and the classroom in the next two years.

We have demonstrated that data-assimilation inference from data and models can also be performed in real time. We have developed a two-stage approach that automatically switches assimilation from a weak-prior (low model skill) mode to a strong-prior mode. Both modes function in real time. The latter mode uses an ensemble to construct subspace approximations of the forecast uncertainty. The ensemble is generated robustly by combining perturbations in boundary conditions with time snapshots of model evolution. This new approach entirely removes the computational bottleneck on model performance because only a few model integrations are required, and surrogate information about the state's uncertainty is gleaned from exemplars constructed from the temporal evolution of the model state. A forty-member ensemble (typical in meteorological use) thus requires around 4 separate model simulations, an easy feat to accomplish.

Work on two other ensemble-based assimilation methods has also progressed. First, a fast ensemble smoother (Ocean Dynamics, to appear) shows that fixed-interval smoothing is O(n) in time and that fixed-lag smoothing is independent of the lag length. Second, we have developed a scale-space ensemble filter that combines graphical, multiscale models with spectral estimation to produce rapid estimates of the analysis state. This method is superior to several popular ensemble methods, performs in O(n log n) for state size n, and is highly parallelizable.



Compressed Sensing and Time-Parallel Reduced-Order Modeling for Structural Health Monitoring Using a DDDAS

J. Cortial(1), C. Farhat(1,2), L.J. Guibas(3), and M. Rajashekhar(3)

(1) Institute for Computational and Mathematical Engineering, (2) Department of Mechanical Engineering, (3) Department of Computer Science, Stanford University, Stanford, CA 94305, U.S.A.
[email protected], [email protected], [email protected], [email protected]

Abstract. This paper discusses recent progress achieved in two areas related to the development of a Dynamic Data Driven Applications System (DDDAS) for structural and material health monitoring and critical event prediction. The first area concerns the development and demonstration of a sensor data compression algorithm and its application to the detection of structural damage. The second area concerns the prediction in near real-time of the transient dynamics of a structural system using a nonlinear reduced-order model and a time-parallel ODE (Ordinary Differential Equation) solver.

1 Introduction

The overall and long-term goal of our effort is to enable and promote active health monitoring, failure prediction, aging assessment, informed crisis management, and decision support for complex and degrading structural engineering systems based on dynamic-data-driven concepts. Our approach involves the development of a Dynamic Data Driven Applications System (DDDAS) that demonstrates numerically, as much as possible, its suitability for structural health monitoring and critical event prediction. An outline of this approach, its objectives, and preliminary architectural concepts to support it can be found in [1]. The main objective of this paper is to describe progress achieved in two areas of activity that are directly related to drastically reducing the overall sensing and computational cost involved in the utilization of such a DDDAS. For this purpose, our efforts draw experiences and technologies from our previous research on the design of a data-driven environment for multiphysics applications (DDEMA) [2,3].

The first area of progress pertains to the development and implementation of an efficient data compression scheme for sensor networks that addresses the issue of limited communication bandwidth. Unlike other data compression algorithms, this scheme does not require knowledge of the compressing transform of the input signal. It constitutes a first step towards the efficient coupling between a given sensor network and a given computational model. The second area of progress pertains to the development and implementation of a numerical simulator for the prediction in near real-time of the transient dynamics of a structural system. The new aspect of this effort is the generalization to nonlinear second-order hyperbolic problems of the PITA (Parallel Implicit Time-integration Algorithms) framework developed in [4,5] for linear time-dependent problems. Preliminary verification and performance assessment examples that furthermore illustrate the basic concepts behind both methodologies outlined above are also provided in this paper.

2 An Efficient Data Compression Scheme for Sensor Networks

In the context of a DDDAS for health monitoring and critical event prediction, the sensor network collects the monitored data of the structure and sends it to the numerical simulator. This simulator is equipped with full- and reduced-order computational models that are validated for healthy states of the structure. Differences between sensed and computed (possibly in near real-time) data, or between sensed data and data retrieved in real time from a computational database in the simulator, form a special class of sparse signals that can be used to locate, assess, and possibly predict the evolution of structural damage, as well as to dynamically update the computational models [6,7]. Hence, the sensor network acts in general as the source of the data used for guiding and driving numerical simulations and/or updating the underlying computational models.

Communication bandwidth is a very limited resource in sensor networks. For this reason, the trivial approach where all sensor readings are sent to the simulator is unlikely to be deployed. Compressed sensing offers an avenue to a different approach that avoids expensive communication between the sensor network and the numerical simulator. Such an alternative approach is not only more practical for health monitoring in general, but also essential for critical event prediction. In this section, we discuss our chosen data compression scheme, whose theory was developed in [8]. Unlike many other data compression algorithms, this scheme does not require any information about the compressing transform of the input signal. It only assumes that the signal is compressible, that is, it has a sparse representation in some orthogonal basis. In health monitoring applications, this assumption typically holds because, as mentioned above, a signal usually consists of the difference between a measured quantity and a predicted one. Since a sparse signal can be reconstructed from the knowledge of a few random linear projections [8], collecting only these few random projections constitutes a feasible mechanism for data compression and provides the sought-after saving in communication time.

2.1 Compressed Sensing Overview

Let x0 ∈ R^m denote a signal of interest that has a sparse representation in some orthogonal basis, and let φ ∈ R^{n×m}, with n < m, denote the random matrix generated by the "uniform spherical ensemble" technique described in [8]. Consider the vector y = φx0. This matrix-vector product represents the result of n random linear projections (or samples) of the original signal x0. From the knowledge of y, one can reconstruct an approximation x̃0 of x0 as follows:

x̃0 = arg min_x ||x||_1  subject to  y = φx        (1)

The reconstruction technique summarized by the above equation involves essentially linear, non-adaptive measurements followed by a nonlinear approximate reconstruction. In [8], it is proved that despite its undersampling (n < m), this technique leads to an accurate reconstruction x̃0 when n = O(N log(m)), where N denotes the number of largest transform coefficients of x0, that is, the number of transform coefficients of x0 that would be needed to build a reasonable approximation of this signal. For example, for a k-sparse system (a system with only k values different from zero, or larger in absolute value than a certain minimum threshold), N = k.
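The reconstruction in Eq. (1) can be carried out with a general-purpose linear-programming solver by splitting x into nonnegative parts. The sketch below is illustrative only (NumPy/SciPy, hypothetical function name reconstruct_l1); in particular, the Gaussian sensing matrix merely stands in for the uniform spherical ensemble of [8] and is an assumption of this example, not the scheme used in the paper.

import numpy as np
from scipy.optimize import linprog

def reconstruct_l1(phi, y):
    # Basis pursuit: min ||x||_1 subject to phi @ x = y.
    # Split x = u - v with u, v >= 0 and minimize sum(u + v).
    n, m = phi.shape
    c = np.ones(2 * m)                          # objective: sum(u) + sum(v)
    A_eq = np.hstack([phi, -phi])               # phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * m), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

rng = np.random.default_rng(0)
m, n, k = 200, 60, 5                            # signal length, samples, sparsity
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
phi = rng.standard_normal((n, m)) / np.sqrt(n)  # stand-in for the random ensemble
y = phi @ x0                                    # n random projections (compressed data)
x_rec = reconstruct_l1(phi, y)
print(np.max(np.abs(x_rec - x0)))               # small once n = O(k log m)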

2.2 Implementation

In our proposed sensing approach, we avoid expensive communication between the sensor network and the simulator by communicating to the latter only summaries of the original sensor readings. These summaries contain, however, enough information to allow the extraction of the deviant sensor readings and the localization of the sensors announcing deviant information. Here, "deviant" refers to a measurement that is significantly different from an expected value obtained by simulation and is therefore indicative, for example, of structural damage. More specifically, our approach to compressed sensing is composed of two main steps that are described next. Extraction of Compressed Summaries. Let s0 ∈ R^m and p0 ∈ R^m denote the measured and numerically predicted readings at m sensor nodes, respectively. We compute n (n

(2)

where X is a set of input events; Y is a set of output events; D is a set of component names; for each i in D, Mi is a component model; Ii is the set of influencees of i; and for each j in Ii, Zi,j is the i-to-j output translation function. It includes three cases: (1) Zi,j : X → Xj, if i = N; (2) Zi,j : Yi → Y, if j = N; (3) Zi,j : Yi → Xj, if i ≠ N and j ≠ N. These three cases are also called the external input coupling (EIC), the external output coupling (EOC) and the internal coupling (IC), respectively.
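Read concretely, this coupled structure and its three coupling categories can be captured by a small data container. The sketch below is an illustrative assumption in plain Python (the coupled model itself is identified by the name "N"); it is not code from the paper.

from dataclasses import dataclass
from typing import Any, Callable, Dict, Set, Tuple

@dataclass
class CoupledDEVS:
    # Coupled model N = <X, Y, D, {Mi}, {Ii}, {Zi,j}> (illustrative only).
    X: Set[Any]                                     # input events of N
    Y: Set[Any]                                     # output events of N
    D: Set[str]                                     # component names
    M: Dict[str, Any]                               # component models, one per name in D
    I: Dict[str, Set[str]]                          # influencees of each component (or of N)
    Z: Dict[Tuple[str, str], Callable[[Any], Any]]  # (i, j) -> translation function

    def classify(self, i: str, j: str) -> str:
        # EIC if the source is N itself, EOC if the target is N, IC otherwise.
        if i == "N":
            return "EIC"   # Z[i,j]: X  -> Xj
        if j == "N":
            return "EOC"   # Z[i,j]: Yi -> Y
        return "IC"        # Z[i,j]: Yi -> Xj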


2.2 Timed Transition System (TTS)

A Timed Transition System Tt is represented as a 5-tuple:

Tt = < S, init, Σ, D, T >        (3)

where S is a possibly infinite set of states; init is an initial state; Σ = Act ∪ R0+ is the alphabet, where Act is a set of discrete actions; D is a set of discrete transitions, noted s --x--> s', where x ∈ Σ and s, s' ∈ S, asserting that "from state s the system can instantaneously move to state s' via the occurrence of the event x"; and T is a set of time-passage transitions, noted s --d--> s', where s, s' ∈ S, asserting that "from state s the system can move to state s' during a positive amount of time d in which no discrete events occur". Time-passage transitions are assumed to satisfy two axioms:

Axiom 1: If s --d--> s' and s' --d'--> s'', then s --(d+d')--> s'';
Axiom 2: Each time-passage step s --d--> s' has a trajectory.

The trajectory usually used is the I-trajectory. An I-trajectory is defined as a function ω : I → S, where I is a closed interval of real values beginning with 0. ω also satisfies the following property: for ∀d, d' ∈ I, if d < d', then ω(d) --(d'-d)--> ω(d'). We often use ω.ltime to represent the supremum of I, ω.fstate to denote the first state ω(0), and ω.lstate to denote the last state ω(ω.ltime). So, for each time-passage step s --d--> s' there is a [0, d]-trajectory ω with ω.fstate = s and ω.lstate = s'.

Timed execution fragment: Given a TTS, a timed execution fragment is a finite alternating sequence γ = ω0 a1 ω1 a2 ω2 ... an ωn, where each ωi is an I-trajectory, each ai is a discrete event, and ωi.lstate --a(i+1)--> ω(i+1).fstate. The length and initial state of γ are noted γ.ltime and γ.fstate, where γ.ltime = Σi ωi.ltime and γ.fstate = ω0.fstate. All possible timed execution fragments of Tt are described by the set execs(Tt).

Timed trace: Every timed execution fragment γ = ω0 a1 ω1 a2 ω2 ... an ωn has a corresponding timed trace, noted trace(γ), which is defined as an alternating sequence of pairs (ai, ωi.ltime), where the ai occur in trace(γ) in the same order as they occur in γ. All possible timed traces of Tt are described by the set traces(Tt).

2.3 Semantics of DEVS Model Based on TTS

The behavioral semantics of an atomic Parallel DEVS model can be described by timed execution fragments and timed traces. For a Parallel DEVS model defined in equation (1), an execution fragment is a finite alternating sequence γ = ω0 a1 ω1 a2 ω2 ... an ωn which includes two kinds of transitions:


Time-passage transition: each ωi in γ is a map from an interval Ii = [0, ti] to the global state space of the model. For ∀j, j' ∈ Ii with j < j', if ωi(j) = (s, e), then ωi(j') = (s, e + j' - j).

Discrete event transition: the discrete events of Parallel DEVS can be divided into two categories, input and output events. According to the execution of DEVS, when they occur concurrently, output events must be dealt with first. As shown in the abstract simulator of DEVS, when an atomic model is about to send output it first receives a "∗" event; the "∗" event and the output event can be seen as a pair because they must occur together. If there is no input event when the output event occurs, the component receives an empty event φ. If (s, e) = ωi-1(sup(Ii-1)) and (s', e') = ωi(inf(Ii)), one of the following conditions should be satisfied:

ai = ∗, e = ta(s), (s', e') = (s, e) and λ(s) ∈ Y        (3.1)
ai ∈ X, δext(s, e, ai) = s', e' = 0 and e < ta(s)        (3.2)
ai ∈ X, δcon(s, ai) = s', e' = 0 and e = ta(s)           (3.3)
ai = φ, δint(s) = s', e' = 0 and e = ta(s)               (3.4)
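Operationally, conditions (3.1)-(3.4) say how a global state (s, e) reacts to the "∗" event, to input bags, and to the empty event φ. The following sketch is an illustrative reading under simplifying assumptions (a single input-bag argument, plain Python); it is not the paper's abstract simulator.

class AtomicDEVS:
    # Illustrative wrapper around an atomic model <X, Y, S, s0, d_int, d_ext, d_con, out, ta>.
    def __init__(self, s0, ta, d_int, d_ext, d_con, out):
        self.s, self.e = s0, 0.0                  # global state (s, e), 0 <= e <= ta(s)
        self.ta, self.d_int, self.d_ext = ta, d_int, d_ext
        self.d_con, self.out = d_con, out

    def advance(self, d):                         # time-passage transition
        assert 0.0 <= self.e + d <= self.ta(self.s)
        self.e += d

    def star(self):                               # (3.1): "*" at e = ta(s), emit output
        assert self.e == self.ta(self.s)
        return self.out(self.s)                   # lambda(s); (s, e) unchanged

    def receive(self, x_bag):                     # input bag, possibly empty
        if x_bag and self.e < self.ta(self.s):
            self.s = self.d_ext((self.s, self.e), x_bag)   # (3.2)
        elif x_bag:
            self.s = self.d_con(self.s, x_bag)             # (3.3): e = ta(s)
        else:
            assert self.e == self.ta(self.s)
            self.s = self.d_int(self.s)                    # (3.4): empty event phi
        self.e = 0.0                                       # e' = 0 in all three cases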

If two timed systems have the same timed trace set, they obviously have the same timed behavior set. The behavioral semantics of a Parallel DEVS model can therefore be described by a TTS that has the same timed trace set as the DEVS model. For a DEVS model M = < X, Y, S, s0, δint, δext, δcon, λ, ta >, there is a translation, noted Tts, which maps M to a semantically equivalent TTS:

Tts(M) = < SM, initM, ΣM, DM, TM >        (4)

with: SM = QM = {(s, e) | s ∈ S, 0 ≤ e ≤ ta(s)} is the state set, equal to the global state space of M; initM = (s0, 0) is the initial state; ΣM = (X ∪ {φ, ∗}) ∪ Y ∪ R0+ is the alphabet;

DM is the set of discrete event transitions; for ∀x ∈ (X ∪ {φ}) ∪ Y it is defined as DM = {(s, e) --x--> (s', 0) | (s, e) and s' satisfy the conditions described in equation (3)};

TM is the set of time-passage transitions; for ∀d ∈ R0+ it is defined as TM = {(s, e) --d--> (s, e') | (s, e) ∈ QM, e' = e + d, 0 ≤ e' ≤ ta(s)}.

Obviously, Tts(M) has semantics equivalent to that of M.

3 Semantics of Coupled Parallel DEVS Model Based on TA

3.1 Timed Automata

A timed automaton consists of a finite automaton augmented with a finite set of clock variables and with clock constraints on its transitions; additionally, the locations can carry local invariant conditions.


For a set of clocks noted X, a clock constraint over it is defined by:

ϕ := x ∼ c | x − y ∼ c | ¬ϕ | ϕ1 ∧ ϕ2,

where x and y are clocks in X, c is a nonnegative integer constant, and ϕ1 and ϕ2 are clock constraints. The set of all constraints over X is noted Φ(X). A clock assignment over a set X of clocks is a function v : X → R0+ that assigns each clock a real value. If a clock constraint ϕ is true for all clocks under a clock assignment v, we say that v satisfies ϕ, noted v ∈ ϕ. For Y ⊆ X, [Y := t]v means that each clock x ∈ Y is assigned the value t, while each clock in X − Y keeps its value under v.

A timed automaton is defined as a 6-tuple A = < L, l0, Σ, C, I, E >, where L is a finite set of locations; l0 is an initial location; Σ is a finite alphabet; C is a finite set of clocks; I : L → Φ(C) is a map that assigns each l ∈ L a clock constraint in Φ(C); and E ⊆ L × Σ × Φ(C) × 2^C × L is a set of transitions: < l, a, ϕ, λ, l' > means a transition from l to l' when the action a occurs and ϕ is satisfied by the current clock values, and λ ⊆ C is the set of clocks to be reset when the transition occurs. < l, a, ϕ, λ, l' > is also noted l --a,ϕ,λ--> l'.

For a timed automaton A = < L, l0, Σ, C, I, E >, its semantics is defined as a TTS Tt = < SA, initA, ΣA, DA, TA >, where SA is the set of states, each a pair (l, v) with l ∈ L and v ∈ I(l); initA = (l0, v0), where v0 ∈ I(l0) is the clock assignment with v0(x) = 0 for each x ∈ C; ΣA = Σ ∪ R0+ is the alphabet; TA is the set of time-passage transitions: for a state (l, v) and a nonnegative real value d ≥ 0, if (v + d') ∈ I(l) for each 0 ≤ d' ≤ d, then (l, v) --d--> (l, v + d); and DA is the set of discrete event transitions: for a state (l, v) and a transition < l, a, ϕ, λ, l' >, if v ∈ ϕ, then (l, v) --a--> (l', [λ := 0]v), provided [λ := 0]v ∈ I(l').
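These two transition rules can be read operationally as follows. The sketch is a minimal illustration under stated assumptions (clock valuations stored as plain dictionaries, convex invariants so that checking the delayed endpoint suffices); it is not a model checker.

def can_delay(loc, v, d, invariant):
    # Time passage (l, v) --d--> (l, v + d): all clocks grow by d and the location
    # invariant must still hold (for convex invariants the endpoint check suffices).
    shifted = {x: t + d for x, t in v.items()}
    return invariant[loc](shifted)

def fire(loc, v, edge, invariant):
    # Discrete transition for edge = (l, a, guard, resets, l'): the guard must hold,
    # clocks in 'resets' are set to 0, and the target invariant must hold.
    l, a, guard, resets, l_next = edge
    assert l == loc and guard(v)
    v_next = {x: (0.0 if x in resets else t) for x, t in v.items()}
    assert invariant[l_next](v_next)
    return l_next, v_next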

3.2 From Atomic Finite DEVS to TA

TA are more expressive than DEVS models for modeling timed systems. In this section, we seek a translation function Taut which translates a Parallel DEVS model M into a trace-equivalent TA noted Taut(M). As shown in Fig. 1, TTS is selected as the common semantic base for these two models. Because an infinite DEVS model cannot be formally verified owing to its infinite global state space, we only consider finite Parallel DEVS here. The finiteness of a DEVS model includes two aspects: first, the number of states is finite; second, the number of external transitions is also finite, i.e., each external transition function is a finite piecewise function.


Fig. 1. Illustration of the semantic equivalence between a DEVS model M and its corresponding TA model Taut(M). The TTS model is the intermediate basis for this equivalence.

For an atomic DEVS model M = < X, Y, S, s0, δint, δext, δcon, λ, ta >, there is a semantics-preserving translation, noted Taut, which maps the DEVS model to a semantically equivalent TA (shown in Fig. 2):

Taut(M) = < LM, l0(M), ΣM, CM, IM, EM >        (5)

with: LM = S is the set of locations; l0(M) = s0 is the initial location; ΣM = (X ∪ {φ, ∗}) ∪ Y is the alphabet; CM is the clock set: because there is one time variable e in an atomic DEVS model, there is exactly one clock in CM, noted CM = {e}; IM : LM → Φ(CM) is a map with IM(l) : 0 ≤ e < ta(l) for ∀l ∈ LM; and EM ⊆ LM × ΣM × Φ(CM) × 2^CM × LM is the set of transitions. For (l --a,ϕ,κ--> l') ∈ EM, there are three cases:

(1) If a = ∗, then (ϕ : e = ta(l)) ∧ (κ = {}) ∧ (l' = l);        (6.1)
(2) If a ∈ X, then (ϕ : 0 < e < ta(l)) ∧ (κ = {e}) ∧ (l' = δext((l, e), a)), or
    (ϕ : e = ta(l)) ∧ (κ = {e}) ∧ (l' = δcon(l, a));        (6.2)
(3) If a = φ, then (ϕ : e = ta(l)) ∧ (κ = {e}) ∧ (l' = δint(l)).        (6.3)
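The three cases can be mechanized for a finite atomic model. The sketch below is an illustrative assumption (guards written as plain strings, and δext taken to be independent of the elapsed time e, i.e., a single piece of the finite piecewise function); it is not the paper's construction tool.

def build_ta_edges(states, inputs, ta, d_int, d_ext, d_con):
    # Generate TA edges (l, action, guard, clocks_to_reset, l') following (6.1)-(6.3).
    edges = []
    for l in states:
        edges.append((l, "*", f"e == {ta(l)}", frozenset(), l))                       # (6.1)
        for a in inputs:
            edges.append((l, a, f"0 < e < {ta(l)}", frozenset({"e"}), d_ext(l, a)))   # (6.2)
            edges.append((l, a, f"e == {ta(l)}",    frozenset({"e"}), d_con(l, a)))   # (6.2)
        edges.append((l, "phi", f"e == {ta(l)}", frozenset({"e"}), d_int(l)))         # (6.3)
    return edges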

Fig. 2. Illustration of the translation from a DEVS model M to its corresponding TA model Taut(M). Here "!" refers to an output event, "?" refers to a received event, and "∗" refers to an internal event sent by the coordinator of M, which is discussed in Section 3.3.


It is easy to see that the semantics of Taut(M) is exactly the TTS Tts(M) defined in equation (4). Thereby, the semantics of Taut(M) is equal to the semantics of M.

3.3 From Coupled DEVS Model to Composition of TAs

Given two TAs A1 = < L1, l10, Σ1, C1, I1, E1 > and A2 = < L2, l20, Σ2, C2, I2, E2 > whose clock sets C1 and C2 are disjoint, their parallel composition is a TA noted A1 || A2 = < L1 × L2, (l10, l20), Σ1 ∪ Σ2, C1 ∪ C2, I, E >, where I(l1, l2) = I1(l1) ∧ I2(l2) and the transition set E is defined as follows:

(1) For ∀a ∈ Σ1 ∩ Σ2, if transitions < l1, a, ϕ1, λ1, l1' > and < l2, a, ϕ2, λ2, l2' > exist in E1 and E2 respectively, then < (l1, l2), a, ϕ1 ∧ ϕ2, λ1 ∪ λ2, (l1', l2') > is included in E;
(2) For ∀a ∈ Σ1 \ Σ2, each transition < l1, a, ϕ1, λ1, l1' > in E1 and each l2 ∈ L2, the transition < (l1, l2), a, ϕ1, λ1, (l1', l2) > is included in E;
(3) For ∀a ∈ Σ2 \ Σ1, each transition < l2, a, ϕ2, λ2, l2' > in E2 and each l1 ∈ L1, the transition < (l1, l2), a, ϕ2, λ2, (l1, l2') > is included in E.

The composition of DEVS models is carried out by a coordinator, which has the same interface as an atomic DEVS model and can itself be included in another coordinator. The communication among DEVS models is a weak synchronization relationship controlled by the coordinator: when the elapsed time equals the time advance, the state transitions of the child models must occur in parallel after all of them have sent their outputs. This weak synchronization can be depicted by channels in TA, where a channel is denoted by "?" and "!" events with the same event name. In this way, the coupled model formed by a coordinator and its child models has exactly the same behavior as the parallel composition of the TAs corresponding to them (shown in Fig. 3).
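Rules (1)-(3) amount to a product construction over transitions. The following sketch is illustrative only (transitions as plain tuples, guards treated as opaque objects combined by a caller-supplied conj helper); it is not a verified model-checker routine.

def parallel_compose(E1, E2, L1, L2, Sigma1, Sigma2, conj):
    # Each transition is (l, a, guard, resets, l_next); resets are sets of clocks.
    E = []
    shared = Sigma1 & Sigma2
    for (l1, a, g1, r1, t1) in E1:
        if a in shared:
            for (l2, b, g2, r2, t2) in E2:
                if b == a:                                   # rule (1): synchronize
                    E.append(((l1, l2), a, conj(g1, g2), r1 | r2, (t1, t2)))
        else:
            for l2 in L2:                                    # rule (2): A1 moves alone
                E.append(((l1, l2), a, g1, r1, (t1, l2)))
    for (l2, a, g2, r2, t2) in E2:
        if a not in shared:
            for l1 in L1:                                    # rule (3): A2 moves alone
                E.append(((l1, l2), a, g2, r2, (l1, t2)))
    return E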

Fig. 3. Illustration of the translation from the coordinator of a coupled DEVS model to its corresponding TA model. Here "c" is the coordinator under discussion; "!" refers to an output event and "?" to a received event; "*parent" is the internal output-driving event sent by the parent of the coordinator; "*" is the internal output-driving event the coordinator sends to its child models; "Xc" and "Yc" are the input and output events the coordinator exchanges with the outside environment; "Xi" and "Yi" are the input and output events the coordinator sends to and receives from its child models; "eL" is the time of the last event; "eN" is the scheduled time of the next output event, and the function NexteN() updates the value of eN when a transition has just occurred. In many kinds of TAs (such as the TA defined in UPPAAL [7]), global variables are allowed, so Xi, Yi, Xc, and Yc can all be defined as integer values or integer vectors, which improves efficiency. "eG" is the global virtual time controlled by the root coordinator.


3.4 Semantics of the Whole DEVS Model

A virtual simulation system is a closed system; the execution of the whole simulation model is controlled by a time management unit whose duty is to schedule the time advancement of the whole system. In DEVS-based simulation, the time advancement is managed by a special coordinator named the root coordinator, which has no interaction with the outside environment and is only responsible for advancing the global virtual time eG and translating the output of one of its child models into the input of another child model. It is a reduced coordinator, and its corresponding TA is shown in Fig. 4.
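The root coordinator's role (advance eG, drive outputs, route the output of one child to the input of another) can be read as a simple loop. The sketch below is an illustrative assumption with a hypothetical child interface (next_event_time, emit_output, receive) and a route function; the paper specifies this behavior as the TA of Fig. 4, not as code.

def root_coordinator(children, t_end, route):
    # children: objects exposing next_event_time(), emit_output(t), receive(bag, t).
    # route(src, y) returns a list of (dst, x) translations of output y.
    eG = 0.0                                              # global virtual time
    while eG <= t_end:
        eG = min(c.next_event_time() for c in children)   # advance eG
        if eG > t_end:
            break
        imminent = [c for c in children if c.next_event_time() == eG]
        outputs = [(c, c.emit_output(eG)) for c in imminent]   # output phase
        bags = {c: [] for c in children}
        for src, y in outputs:
            for dst, x in route(src, y):                  # translate outputs to inputs
                bags[dst].append(x)
        for c, bag in bags.items():
            if c in imminent or bag:
                c.receive(bag, eG)                        # child transitions in parallel
    return eG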

Fig. 4. Illustration of the translation from the root coordinator of the whole DEVS simulation model to its corresponding TA model. Here "r" refers to the root coordinator under discussion; "*Root" refers to the internal event that drives the output of its child models; "eG" is the global virtual time; and the function NexteG() updates the value of eG when a simulation step has just passed. The internal events between the root coordinator and its child models, noted "Xj" and "Yw", can also be defined as integer values or integer vectors, as in Fig. 3.

4 Conclusion

Trace equivalence is laid down as the basis of the translation from Parallel DEVS models to Timed Automata models. This is an observable equivalence, because traces can be observed from outside. This kind of equivalence is well suited to DEVS component-based simulation, where what matters most is not the internal detail of the components but their behavior. Based on this equivalence-preserving translation, formal DEVS model verification can be realized easily using existing model checking methods and tools for timed automata.

References
1. Darema, F.: Dynamic Data Driven Application Systems: A New Paradigm for Application Simulations and Measurements. Lecture Notes in Computer Science, Vol. 3038. Springer-Verlag, Heidelberg (2004) 662–669
2. Zeigler, B.P., Praehofer, H., Kim, T.G.: Theory of Modeling and Simulation, 2nd edn. Academic Press, New York (2000)
3. Hu, X., Zeigler, B.P., Mittal, S.: Variable Structure in DEVS Component-Based Modeling and Simulation. SIMULATION: Transactions of the Society for Modeling and Simulation International, Vol. 81 (2005) 91–102
4. Alur, R., Dill, D.L.: A Theory of Timed Automata. Theoretical Computer Science, Vol. 126 (1994) 183–235
5. Labiche, D.: Towards the Verification and Validation of DEVS Models. In: Proceedings of the Open International Conference on Modeling & Simulation (2005) 295–305
6. Dacharry, H.P., Giambiasi, N.: Formal Verification with Timed Automata and DEVS Models. In: Proceedings of the Sixth Argentine Symposium on Software Engineering (2005) 251–265
7. Larsen, K.G., Pettersson, P.: Uppaal in a Nutshell. Int. Journal on Software Tools for Technology Transfer, Vol. 1 (1997) 134–152

Author Index

Ab´ anades, Miguel A. II-227 Abbate, Giannandrea I-842 Abdullaev, Sarvar R. IV-729 Abdullah, M. I-446 Acevedo, Liesner I-152 Adam, J.A. I-70 Adriaans, Pieter III-191, III-216 Adrianto, Indra I-1130 Agarwal, Pankaj K. I-988 Ahn, Chan-Min II-515 Ahn, Jung-Ho IV-546 Ahn, Sangho IV-360 Ahn, Sukyoung I-660 Ahn, Woo Hyun IV-941 Ahn, Young-Min II-1202, II-1222 Ai, Hongqi II-327 Al-Sammane, Ghiath II-263 Alexandrov, Vassil I-747, II-744, II-768, II-792 Alfonsi, Giancarlo I-9 Alidaee, Bahram IV-194 Aliprantis, D. I-1074 Allen, Gabrielle I-1034 Alper, Pinar II-712 Altintas, Ilkay III-182 ´ Alvarez, Eduardo J. II-138 An, Dongun III-18 An, Sunshin IV-869 Anthes, Christoph II-752, II-776 Araz, Ozlem Uzun IV-973 Archip, Neculai I-980 Arifin, B. II-335 Aristov, V.V. I-850 Arslanbekov, Robert I-850, I-858 Arteconi, Leonardo I-358 Aslan, Burak Galip III-607 Assous, Franck IV-235 Atanassov, E. I-739 Avolio, Maria Vittoria I-866 Awan, Asad I-1205 Babik, Marian III-265 Babuˇska, I. I-972 Ba¸ca ˜o, Fernando II-542

Bae, Guntae IV-417 Bae, Ihn-Han IV-558 Baek, Myung-Sun IV-562 Baek, Nakhoon II-122 Bai, Yin III-1008 Bai, Zhaojun I-521 Baik, Doo-Kwon II-720 Bajaj, C. I-972 Balas, Lale I-1, I-38 Bali´s, Bartosz I-390 Balogh, Zoltan III-265 Baloian, Nelson II-799 Bang, Young-Cheol III-432 Bao, Yejing III-933 Barab´ asi, Albert-L´ aszl´ o I-1090 Barabasz, B. I-342 Barrientos, Ricardo I-229 Barros, Ricardo III-253 Baruah, Pallav K. I-603 Bashir, Omar I-1010 Bass, J. I-972 Bastiaans, R.J.M. I-947 Baumgardner, John II-386 Baytelman, Felipe II-799 Bayyana, Narasimha R. I-334 Bechhofer, Sean II-712 Beezley, Jonathan D. I-1042 Bei, Yijun I-261 Bell, M. I-1074 Belloum, Adam III-191 Bemben, Adam I-390 Benhai, Yu III-953 Benkert, Katharina I-144 Bennethum, Lynn S. I-1042 Benoit, Anne I-366, I-591 Bervoets, F. II-415 Bhatt, Tejas I-1106 Bi, Jun IV-801 Bi, Yingzhou IV-1061 Bidaut, L. I-972 Bielecka, Marzena II-970 Bielecki, Andrzej II-558 Black, Peter M. I-980 Bo, Hu IV-522


Author Index

Bo, Shukui III-898 Bo, Wen III-917 Bochicchio, Ivana II-990, II-997 Bosse, Tibor II-888 Botana, Francisco II-227 Brendel, Ronny II-839 Bressler, Helmut II-752 Brewer, Wes II-386 Brill, Downey I-1058 Brooks, Christopher III-182 Browne, J.C. I-972 Bu, Jiajun I-168, I-684 Bubak, Marian I-390 Bungartz, Hans-Joachim I-708 Burguillo-Rial, Juan C. IV-466 Burrage, Kevin I-778 Burrage, Pamela I-778 Bushehrian, Omid I-599 Byon, Eunshin I-1197 Byrski, Aleksander II-928 Byun, Hyeran IV-417, IV-546 Byun, Siwoo IV-889 Cai, Guoyin II-569 Cai, Jiansheng III-313 Cai, Keke I-684 Cai, Ming II-896, III-1048, IV-725, IV-969 Cai, Ruichu IV-1167 Cai, Shaobin III-50, III-157 Cai, Wentong I-398 Cai, Yuanqiang III-1188 Caiming, Zhang II-130 Campos, Celso II-138 Cao, Kajia III-844 Cao, Rongzeng III-1032, IV-129 Cao, Suosheng II-1067 Cao, Z.W. II-363 Carmichael, Gregory R. I-1018 Caron, David I-995 Catalyurek, Umit I-1213 Cattani, Carlo II-982, II-990, II-1004 Cecchini, Arnaldo I-567 ˇ Cepulkauskas, Algimantas II-259 Cetnarowicz, Krzysztof II-920 Cha, Jeong-won IV-721 Cha, JeongHee II-1 Cha, Seung-Jun II-562 Chai, Lei IV-98 Chai, Tianfeng I-1018

Chai, Yaohui II-409 Chai, Zhenhua I-802 Chai, Zhilei I-294 Chakraborty, Soham I-1042 Chandler, Seth J. II-170 Chandola, Varun I-1222 Chang, Ok-Bae II-1139 Chang, Jae-Woo III-621 Chang, Moon Seok IV-542 Chang, Sekchin IV-636 Chang, Yoon-Seop II-562 Chaoguang, Men III-166 Chatelain, Philippe III-1122 Chaturvedi, Alok I-1106 Chawla, Nitesh V. I-1090 Che, HaoYang III-293 Chen, Bin III-653 Chen, Bing III-338 Chen, Changbo II-268 Chen, Chun I-168, I-684 Chen, Gang I-253, I-261, III-1188 Chen, Guangjuan III-984 Chen, Guoliang I-700 Chen, Jianjun I-318 Chen, Jianzhong I-17 Chen, Jiawei IV-59, IV-98 Chen, Jin I-30 Chen, Jing III-669 Chen, Juan IV-921 Chen, Ken III-555 Chen, Lei IV-1124 Chen, Ligang I-318 Chen, Liujun IV-59 Chen, Long IV-1186 Chen, Qingshan II-482 Chen, Tzu-Yi I-302 Chen, Weijun I-192 Chen, Wei Qing II-736 Chen, Xiao IV-644 Chen, Xinmeng I-418 Chen, Ying I-575 Chen, Yun-ping III-1012 Chen, Yuquan II-1186, II-1214 Chen, Zejun III-113 Chen, Zhengxin III-852, III-874 Chen, Zhenyu II-431 Cheng, Frank II-17 Cheng, Guang IV-857 Cheng, Jingde I-406, III-890 Cheng, T.C. Edwin III-338

Author Index Cheng, Xiaobei III-90 Chi, Hongmei I-723 Cho, Eunseon IV-713 Cho, Haengrae IV-753 Cho, Hsung-Jung IV-275 Cho, Jin-Woong IV-482 Cho, Ki Hyung III-813 Cho, Sang-Young IV-949 Cho, Yongyun III-236 Cho, Yookun IV-905 Choe, Yoonsik IV-668 Choi, Bum-Gon IV-554 Choi, Byung-Uk IV-737 Choi, Han-Lim I-1138 Choi, Hyoung-Kee IV-360 Choi, HyungIl II-1 Choi, Jaeyoung III-236 Choi, Jongsun III-236 Choi, Kee-Hyun II-952 Choi, Myounghoi III-508 Choo, Hyunseung I-668, II-1226, III-432, III-465, IV-303, IV-336, IV-530, IV-534, IV-538, IV-550 Chopard, Bastien I-922 Chou, Chung-I IV-1163 Choudhary, Alok III-734 Chourasia, Amit I-46 Chrisochoides, Nikos I-980 Christiand II-760 Chtepen, Maria I-454 Chu, Chao-Hsien III-762 Chu, You-ling IV-1163 Chu, Yuan-Sun II-673 Chuan, Zheng Bo II-25 Chung, Hee-Joon II-347 Chung, Hyunsook II-696 Chung, Min Young IV-303, IV-534, IV-550, IV-554 Chung, Seungjong III-18 Chung, Tai-Myoung III-1024 Chung, Yoojin IV-949 Cianni, Nathalia M. III-253 Ciarlet Jr., Patrick IV-235 Cisternino, Antonio II-585 Claeys, Filip H.A. I-454 Clark, James S. I-988 Clatz, Olivier I-980 Clegg, June IV-18 Clercx, H.J.H. I-898 Cline, Alan II-1123


Coen, Janice L. I-1042 Cofi˜ no, A.S. III-82 Cole, Martin J. I-1002 Cong, Guodong III-960 Constantinescu, Emil M. I-1018 Corcho, Oscar II-712 Cornish, Annita IV-18 Cortial, J. I-1171 Costa-Montenegro, Enrique IV-466 Costanti, Marco II-617 Cox, Simon J. III-273 Coyle, E. I-1074 Cuadrado-Gallego, J. II-1162 Cui, Gang IV-1021 Cui, Ruihai II-331 Cui, Yifeng I-46 Cui, Yong IV-817 Curcin, Vasa III-204 Cycon, Hans L. IV-761 D’Ambrosio, Donato I-866 D˘ aescu, Dacian I-1018 Dai, Dao-Qing I-102 Dai, Kui IV-251 Dai, Tran Thanh IV-590 Dai, Zhifeng IV-1171 Danek, Tomasz II-558 Danelutto, Marco II-585 Dang, Sheng IV-121 Dapeng, Tan IV-957 Darema, Frederica I-955 Darmanjian, Shalom I-964 Das, Abhimanyu I-995 Day, Steven I-46 Decyk, Viktor K. I-583 Degond, Pierre I-939 Delu, Zeng IV-283 Demeester, Piet I-454 Demertzi, Melina I-1230 Demkowicz, L. I-972 Deng, An III-1172 Deng, Nai-Yang III-669, III-882 Deng, Xin Guo II-736 Dhariwal, Amit I-995 Dhoedt, Bart I-454 Di, Zengru IV-98 D´ıaz-Zuccarini, V. I-794 DiGiovanna, Jack I-964 Diller, K.R. I-972 Dimov, Ivan I-731, I-739, I-747


Author Index

Ding, Dawei III-347 Ding, Lixin IV-1061 Ding, Maoliang III-906 Ding, Wantao III-145 Ding, Wei III-1032, IV-129, IV-857 Ding, Yanrui I-294 Ding, Yong IV-1116 Ding, Yongsheng III-74 Ding, Yu I-1197 Diniz, Pedro I-1230 Dittamo, Cristian II-585 Doboga, Flavia II-1060 Dobrowolski, Grzegorz II-944 Dong, Jinxiang I-253, I-261, II-896, II-1115, III-1048, IV-725, IV-969 Dong, Yong IV-921 Dongarra, Jack II-815 Dongxin, Lu III-129 Dostert, Paul I-1002 Douglas, Craig C. I-1002, I-1042 Downar, T. I-1074 Dre˙zewski, Rafal II-904, II-920 Dressler, Thomas II-831 Du, Xu IV-873 Du, Ye III-141 Duan, Gaoyan IV-1091 Duan, Jianyong II-1186 Dunn, Adam G. I-762 Dupeyrat, Gerard. IV-506 Efendiev, Yalchin I-1002 Egorova, Olga II-65 Eilertson, Eric I-1222 Elliott, A. I-972 Ellis, Carla I-988 Emoto, Kento II-601 Engelmann, Christian II-784 Eom, Jung-Ho III-1024 Eom, Young Ik IV-542, IV-977 Ertoz, Levent I-1222 Escribano, Jes´ us II-227 Espy, Kimberly Andrew III-859 Ewing, Richard E. I-1002 Fabozzi, Frank J. III-937 Fairman, Matthew J. III-273 Falcone, Jean-Luc I-922 Fan, Hongli III-563 Fan, Ying IV-98 Fan, Yongkai III-579

Fang, F. II-415 Fang, Fukang IV-59 Fang, Hongqing IV-1186 Fang, Hua III-859 Fang, Li Na II-736 Fang, Lide II-1067 Fang, Liu III-1048 Fang, Yu III-653 Fang, Zhijun II-1037 Fang-an, Deng III-453 Farhat, C. I-1171 Farias, Antonio II-799 Fathy, M. IV-606 Fedorov, Andriy I-980 Fei, Xubo III-244 Fei, Yu IV-741 Feixas, Miquel II-105 Feng, Huamin I-374, II-1012, III-1, III-493 Feng, Lihua III-1056 Feng, Y. I-972 Feng, Yuhong I-398 Ferrari, Edward I-1098 Fidanova, Stefka IV-1084 Field, Tony I-111 Figueiredo, Renato I-964 Fischer, Rudolf I-144 Fleissner, Sebastian I-213 Flikkema, Paul G. I-988 Fl´ orez, Jorge II-166 Fortes, Jos A.B. I-964 Frausto-Sol´ıs, Juan II-370, IV-981 Freire, Ricardo Oliveira II-312 Frigerio, Francesco II-272 Frolova, A.A. I-850 Fu, Chong I-575 Fu, Hao III-1048 Fu, Qian I-160 Fu, Shujun I-490 Fu, Tingting IV-969 Fu, Xiaolong III-579 Fu, Yingfang IV-409 Fu, Zetian III-547 Fuentes, D. I-972 Fujimoto, R.M. I-1050 Fukushima, Masao III-937 F¨ urlinger, Karl II-815 Furukawa, Tomonari I-1180 Fyta, Maria I-786

Author Index Gallego, Samy I-939 G´ alvez, Akemi II-211 Gang, Fang Xin II-25 Gang, Yung-Jin IV-721 Gao, Fang IV-1021 Gao, Liang III-212 Gao, Lijun II-478 Gao, Rong I-1083 Gao, Yajie III-547 Garcia, Victor M. I-152 Gardner, Henry J. I-583 Garre, M. II-1162 Garˇsva, Gintautas II-439 Gautier, Thierry II-593 Gava, Fr´ed´eric I-611 Gawro´ nski, P. IV-43 Geiser, J¨ urgen I-890 Gelfand, Alan I-988 Georgieva, Rayna I-731 Gerndt, Michael II-815, II-847 Gerritsen, Charlotte II-888 Ghanem, Moustafa III-204 Ghattas, Omar I-1010 Gi, YongJae II-114 Gibson, Paul II-386 Gilbert, Anna C. I-1230 Goble, Carole II-712, III-182 Goda, Shinichi IV-142 Goderis, Antoon III-182 Goey, L.P.H. de I-947 Golby, Alexandra I-980 Goldberg-Zimring, Daniel I-980 Golubchik, Leana I-995 Gombos, Daniel I-1138 G´ omez-Tato, A. III-637 Gong, Jian IV-809 Gong, Jianhua III-516, III-563 Gonz´ alez-Casta˜ no, Francisco J. III-637, IV-466 Gonzalez, Marta I-1090 Gore, Ross I-1238 Goto, Yuichi I-406 Gould, Michael II-138 Govindan, Ramesh I-995 Grama, Ananth I-1205 Gregor, Douglas I-620 Gregorio, Salvatore Di I-866 Gu, Guochang III-50, III-90, III-137, III-157, III-178 Gu, Hua-Mao III-591


Gu, Jifa IV-9 Gu, Jinguang II-728 Gu, Yanying IV-312 Guan, Ying I-270 Guang, Li III-166 Guang-xue, Yue IV-741 Guensler, R. I-1050 Guermazi, Radhouane III-773 Guibas, L.J. I-1171 Guo, Bo IV-202 Guo, Jiangyan III-370 Guo, Jianping II-538, II-569 Guo, Song III-137 Guo, Yan III-1004 Guo, Yike III-204 Guo, Zaiyi I-119 Guo, Zhaoli I-802, I-810 Gurov, T. I-739 Guti´errez, J.M. III-82 Gyeong, Gyehyeon IV-977 Ha, Jong-Sung II-154 Ha, Pan-Bong IV-721 Haase, Gundolf I-1002 Habala, Ondrej III-265 Hachen, David I-1090 Haffegee, Adrian II-744, II-768 Hagiwara, Ichiro II-65 Hall, Mary W. I-1230 Hamadou, Abdelmajid Ben III-773 Hammami, Mohamed III-773 Hammond, Kevin II-617 Han, Houde IV-267 Han, Hyuck II-577, III-26, IV-705 Han, Jianjun I-426, IV-965 Han, Jinshu II-1091 Han, Ki-Joon I-692, II-511 Han, Ki-Jun IV-574 Han, Kyungsook I-78, I-94, II-339 Han, Lu IV-598 Han, Mi-Ryung II-347 Han, Qi-ye III-1012 Han, SeungJo III-829, IV-717 Han, Shoupeng I-1246 Han, Yehong III-444 Han, Youn-Hee IV-441 Han, Young-Ju III-1024 Han, Yuzhen III-911 Han, Zhangang IV-98 Hansen, James I-1138


Author Index

Hao, Cheng IV-1005 Hao, Zhenchun IV-841 Hao, Zhifeng IV-1167 Hasan, M.K. I-326 Hasegawa, Hiroki I-914 Hatcher, Jay I-1002, I-1042 Hazle, J. I-972 He, Gaiyun II-1075 He, Jing II-401, II-409 He, Jingsha IV-409 He, Kaijian I-554, III-925 He, Tingting III-587 He, Wei III-1101 He, X.P. II-1083 He, Yulan II-378 He, Zhihong III-347 Heijst, G.J.F. van I-898 Hermer-Vazquez, Linda I-964 Hertzberger, Bob III-191 Hieu, Cao Trong IV-474 Hill, Chris I-1155, I-1163 Hill, Judith I-1010 Hinsley, Wes I-111 Hiroaki, Deguchi II-243 Hirose, Shigenobu I-914 Hluchy, Ladislav III-265 Hobbs, Bruce I-62 Hoekstra, Alfons G. I-922 Hoffmann, C. I-1074 Holloway, America I-302 Honavar, Vasant I-1066 Hong, Choong Seon IV-474, IV-590 Hong, Dong-Suk II-511 Hong, Helen II-9 Hong, Jiman IV-905, IV-925, IV-933 Hong, Soon Hyuk III-523, IV-425 Hong, Weihu III-1056 Hong, Yili I-1066 Hongjun, Yao III-611 Hongmei, Liu I-648 Hor´ ak, Bohumil II-936 Hose, D.R. I-794 Hou, Jianfeng III-313, III-320, III-448 Hou, Wenbang III-485 Hou, Y.M. III-1164 How, Jonathan I-1138 Hsieh, Chih-Hui I-1106 Hu, Bai-Jiong II-1012 Hu, Jingsong I-497 Hu, Qunfang III-1180

Hu, Ting IV-1029 Hu, Xiangpei IV-218 Hu, Xiaodong III-305 Hu, Yanmei I-17 Hu, Yi II-1186, II-1214 Hu, Yincui II-569 Hu, Yuanfang I-46 Hua, Chen Qi II-25 Hua, Kun III-867 Huajian, Zhang III-166 Huan, Zhengliang II-1029 Huang, Chongfu III-1016, III-1069 Huang, Dashan III-937 Huang, Fang II-523 Huang, Han IV-1167 Huang, Hong-Wei III-1114, III-1180 Huang, Houkuan III-645 Huang, Jing III-353 Huang, Kedi I-1246 Huang, LaiLei IV-90 Huang, Lican III-228 Huang, Linpeng II-1107 Huang, Maosong III-1105 Huang, Minfang IV-218 Huang, Mingxiang III-516 Huang, Peijie I-430 Huang, Wei II-455, II-486 Huang, Yan-Chu IV-291 Huang, Yong-Ping III-125 Huang, Yu III-257 Huang, Yue IV-1139 Huang, Z.H. II-1083 Huang, Zhou III-653 Huashan, Guo III-611 Huerta, Joaquin II-138 Huh, Eui Nam IV-498, IV-582 Huh, Moonhaeng IV-889 Hui, Liu II-130 Hunter, M. I-1050 Hur, Gi-Taek II-150 Hwang, Chih-Hong IV-227 Hwang, Hoyoung IV-889, IV-897 Hwang, Jun IV-586 Hwang, Yuan-Chu IV-433 Hwang, Yun-Young II-562 Ibrahim, H. I-446 Iglesias, Andres II-89, II-194, II-235 Inceoglu, Mustafa Murat III-607 Ipanaqu´e, R. II-194

Author Index Iskandarani, ˙ sler, Veysi I¸ ˙ Inan, Asu Ito, Kiichi

Mohamed II-49 I-1, I-38 IV-74

I-1002

Jackson, Peter III-746 Jacob, Robert L. I-931 Jagannathan, Suresh I-1205 Jagodzi´ nski, Janusz II-558 Jaluria, Y. I-1189 Jamieson, Ronan II-744 Jang, Hyun-Su IV-542 Jang, Sung Ho II-966 Jayam, Naresh I-603 Jeon, Jae Wook III-523, IV-425 Jeon, Keunhwan III-508 Jeon, Taehyun IV-733 Jeong, Chang Won III-170 Jeong, Dongwon II-720, III-508, IV-441 Jeong, Seung-Moon II-150 Jeong, Taikyeong T. IV-586 Jeun, In-Kyung II-665 Jho, Gunu I-668 Ji, Hyungsuk II-1222, II-1226 Ji, Jianyue III-945 Ji, Youngmin IV-869 Jia, Peifa II-956 Jia, Yan III-717, III-742 Jian, Kuodi II-855 Jian-fu, Shao III-1130 Jiang, Changjun III-220 Jiang, Dazhi IV-1131 Jiang, Hai I-286 Jiang, He III-293, III-661 Jiang, Jianguo IV-1139 Jiang, Jie III-595 Jiang, Keyuan II-393 Jiang, Liangkui IV-186 Jiang, Ming-hui IV-158 Jiang, Ping III-212 Jiang, Shun IV-129 Jiang, Xinlei III-66 Jiang, Yan III-42 Jiang, Yi I-770, I-826 Jianjun, Guo III-611 Jianping, Li III-992 Jiao, Chun-mao III-1197 Jiao, Licheng IV-1053 Jiao, Xiangmin I-334 Jiao, Yue IV-134

Jin, Hai I-434 Jin, Ju-liang III-980, III-1004 Jin, Kyo-Hong IV-721 Jin, Li II-808 Jin, Shunfu IV-210, IV-352 Jing, Lin-yan III-1004 Jing, Yixin II-720 Jing-jing, Tian III-453 Jinlong, Zhang III-953 Jo, Geun-Sik II-704 Jo, Insoon II-577 Johnson, Chris R. I-1002 Jolesz, Ferenc I-980 Jones, Brittany I-237 Joo, Su Chong III-170 Jordan, Thomas I-46 Jou, Yow-Jen IV-291 Jung, Hyungsoo III-26, IV-705 Jung, Jason J. II-704 Jung, Kwang-Ryul IV-745 Jung, Kyunghoon IV-570 Jung, Soon-heung IV-621 Jung, Ssang-Bong IV-457 Jung, Woo Jin IV-550 Jung, Youngha IV-668 Jurenz, Matthias II-839 Kabadshow, Ivo I-716 Kacher, Dan I-980 Kakehi, Kazuhiko II-601 Kalaycı, Tahir Emre II-158 Kambadur, Prabhanjan I-620 Kanaujia, Atul I-1114 Kaneko, Masataka II-178 Kang, Dazhou I-196 Kang, Hong-Koo I-692, II-511 Kang, Hyungmo IV-514 Kang, Lishan IV-1116, IV-1131 Kang, Mikyung IV-401 Kang, Min-Soo IV-449 Kang, Minseok III-432 Kang, Sanggil III-836 Kang, Seong-Goo IV-977 Kang, Seung-Seok IV-295 Kapcak, Sinan II-235 Kapoor, Shakti I-603 Karakaya, Ziya II-186 Karl, Wolfgang II-831 Kasprzak, Andrzej I-442 Kawano, Akio I-914

1261

1262

Author Index

Kaxiras, Efthimios I-786 Ke, Lixia III-911 Keetels, G.H. I-898 Kempe, David I-995 Kennedy, Catriona I-1098 Kereku, Edmond II-847 Khan, Faraz Idris IV-498, IV-582 Khazanchi, Deepak III-806, III-852 Khonsari, A. IV-606 Ki, Hyung Joo IV-554 Kikinis, Ron I-980 Kil, Min Wook IV-614 Kim, Deok-Hwan I-204 Kim, Beob Kyun III-894 Kim, Byounghoon IV-570 Kim, Byung-Ryong IV-849 Kim, ByungChul IV-368 Kim, ChangKug IV-328 Kim, Changsoo IV-570 Kim, Cheol Min III-559 Kim, Chul-Seung IV-542 Kim, Deok-Hwan I-204, II-515, III-902 Kim, Do-Hyeon IV-449 Kim, Dong-Oh I-692, II-511 Kim, Dong-Uk II-952 Kim, Dong-Won IV-676 Kim, Eung-Kon IV-717 Kim, Gu Su IV-542 Kim, GyeYoung II-1 Kim, H.-K. I-1050 Kim, Hanil IV-660 Kim, Hojin IV-865 Kim, Hyogon IV-709 Kim, Hyun-Ki IV-457, IV-1076 Kim, Jae-gon IV-621 Kim, Jae-Kyung III-477 Kim, Jee-Hoon IV-344, IV-562 Kim, Ji-Hong IV-721 Kim, Jihun II-347 Kim, Jinhwan IV-925 Kim, Jinoh I-1222 Kim, Jong-Bok II-1194 Kim, Jong Nam III-10, III-149 Kim, Jong Tae IV-578 Kim, Joongheon IV-385 Kim, Joung-Joon I-692 Kim, Ju Han II-347 Kim, Jungmin II-696 Kim, Junsik IV-713 Kim, Kanghee IV-897

Kim, Ki-Chang IV-849 Kim, Ki-Il IV-745 Kim, Kilcheon IV-417 Kim, Kwan-Woong IV-328 Kim, Kyung-Ok II-562 Kim, LaeYoung IV-865 Kim, Minjeong I-1042 Kim, Moonseong I-668, III-432, III-465 Kim, Myungho I-382 Kim, Nam IV-713 Kim, Pankoo III-829, IV-660, IV-925 Kim, Sang-Chul IV-320 Kim, Sang-Sik IV-745 Kim, Sang-Wook IV-660 Kim, Sanghun IV-360 Kim, Sangtae I-963 Kim, Seong Baeg III-559 Kim, Seonho I-1222 Kim, Shingyu III-26, IV-705 Kim, Sung Jin III-798 Kim, Sungjun IV-869 Kim, Sung Kwon IV-693 Kim, Sun Yong IV-360 Kim, Tae-Soon III-902 Kim, Taekon IV-482 Kim, Tai-Hoon IV-693 Kim, Ung Mo III-709 Kim, Won III-465 Kim, Yong-Kab IV-328 Kim, Yongseok IV-933 Kim, Young-Gab III-1040 Kim, Young-Hee IV-721 Kisiel-Dorohinicki, Marek II-928 Kitowski, Jacek I-414 Kleijn, Chris R. I-842 Klie, Hector I-1213 Kluge, Michael II-823 Knight, D. I-1189 Kn¨ upfer, Andreas II-839 Ko, Il Seok IV-614, IV-729 Ko, Jin Hwan I-521 Ko, Kwangsun IV-977 Koda, Masato II-447 Koh, Kern IV-913 Kolobov, Vladimir I-850, I-858 Kondo, Djimedo III-1130 Kong, Chunum IV-303 Kong, Xiangjie II-1067 Kong, Xiaohong I-278 Kong, Yinghui II-978

Author Index Kong, Youngil IV-685 Koo, Bon-Wook IV-562 Koo, Jahwan IV-538 Korkhov, Vladimir III-191 Kot, Andriy I-980 Kotulski, Leszek II-880 Kou, Gang III-852, III-874 Koumoutsakos, Petros III-1122 Ko´zlak, Jaroslaw II-872, II-944 Krile, Srecko I-628 Krishna, Murali I-603 Kr¨ omer, Pavel II-936 Kryza, Bartosz I-414 Krzhizhanovskaya, Valeria V. I-755 Kuang, Minyi IV-82 Kuijk, H.A.J.A. van I-947 Kulakowski, K. IV-43 Kulikov, Gennady Yu. I-136 Kulvietien˙e, Regina II-259 Kulvietis, Genadijus II-259 Kumar, Arun I-603 Kumar, Vipin I-1222 Kurc, Tahsin I-1213 Kusano, Kanya I-914 K¨ uster, Uwe I-128 Kuszmaul, Bradley C. I-1163 Kuzumilovic, Djuro I-628 Kwak, Ho Young IV-449 Kwak, Sooyeong IV-417 Kwoh, Chee Keong II-378 Kwon, B. I-972 Kwon, Hyuk-Chul II-1170, II-1218 Kwon, Key Ho III-523, IV-425 Kwon, Ohhoon IV-913 Kwon, Ohkyoung II-577 Kyriakopoulos, Fragiskos II-625 Laat, Cees de III-191 Laclavik, Michal III-265 Lagan` a, Antonio I-358 Lai, C.-H. I-294 Lai, Hong-Jian III-377 Lai, K.K. III-917 Lai, Kin Keung I-554, II-423, II-455, II-486, II-494, III-925, IV-106 Landertshamer, Roland II-752, II-776 Lang, Bruno I-716 Lantz, Brett I-1090 Larson, J. Walter I-931 Laserra, Ettore II-997

1263

Laszewski, Gregor von I-1058 Lawford, P.V. I-794 Le, Jiajin III-629 Lee, Bong Gyou IV-685 Lee, Byong-Gul II-1123 Lee, Chang-Mog II-1139 Lee, Changjin IV-685 Lee, Chung Sub III-170 Lee, Donghwan IV-385 Lee, Edward A. III-182 Lee, Eun-Pyo II-1123 Lee, Eung Ju IV-566 Lee, Eunryoung II-1170 Lee, Eunseok IV-594 Lee, Haeyoung II-73 Lee, Heejo IV-709 Lee, HoChang II-162 Lee, Hyun-Jo III-621 Lee, Hyungkeun IV-482 Lee, In-Tae IV-1076 Lee, Jae-Hyung IV-721 Lee, Jaeho III-477 Lee, Jaewoo IV-913 Lee, JaeYong IV-368 Lee, Jang-Yeon IV-482 Lee, Jin-won IV-621 Lee, Jong Sik II-966 Lee, Joonhyoung IV-668 Lee, Ju-Hong II-515, III-902 Lee, Jung-Bae IV-949 Lee, Jung-Seok IV-574 Lee, Junghoon IV-401, IV-449, I-586, IV-660, IV-925 Lee, Jungwoo IV-629 Lee, K.J. III-701 Lee, Kye-Young IV-652 Lee, Kyu-Chul II-562 Lee, Kyu Min II-952 Lee, Kyu Seol IV-566 Lee, Mike Myung-Ok IV-328 Lee, Namkyung II-122 Lee, Peter I-1098 Lee, Samuel Sangkon II-1139, III-18 Lee, Sang-Yun IV-737 Lee, SangDuck IV-717 Lee, Sang Ho III-798 Lee, Sang Joon IV-449 Lee, Seok-Lae II-665 Lee, Seok-Lyong I-204 Lee, SeungCheol III-709

1264

Author Index

Lee, Seungwoo IV-905 Lee, Seung Wook IV-578 Lee, Soojung I-676 Lee, SuKyoung IV-865 Lee, Sungyeol II-73 Lee, Tae-Jin IV-336, IV-457, IV-550, IV-554 Lee, Wan Yeon IV-709 Lee, Wonhee III-18 Lee, Wonjun IV-385 Lee, Young-Ho IV-897 Lee, Younghee IV-629 Lei, Lan III-381, III-384 Lei, Tinan III-575 Lei, Y.-X. IV-777 Leier, Andr´e I-778 Leiserson, Charles E. I-1163 Lemaire, Fran¸cois II-268 Lenton, Timothy M. III-273 Leung, Kwong-Sak IV-1099 Levnaji´c, Zoran II-633 Li, Ai-Ping III-121 Li, Aihua II-401, II-409 Li, Changyou III-137 Li, Dan IV-817, IV-841 Li, Deng-Xin III-377 Li, Deyi II-657 Li, Fei IV-785 Li, Gen I-474 Li, Guojun III-347 Li, Guorui IV-409 Li, Haiyan IV-961 Li, Hecheng IV-1159 Li, Jianping II-431, II-478, III-972 Li, Jinhai II-1067 Li, Jun III-906 Li, Li III-984 Li, Ling II-736 Li, Ming I-374, II-1012, III-1, III-493 Li, MingChu III-293, III-329 Li, Ping III-440 Li-ping, Chen IV-741 Li, Qinghua I-426, IV-965 Li, Renfa III-571 Li, Rui IV-961 Li, Ruixin III-133 Li, Runwu II-1037 Li, Sai-Ping IV-1163 Li, Shanping IV-376 Li, Shengjia III-299

Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li, Li,

Shucai III-145 Tao IV-166 Weimin III-629 Wenhang III-516 X.-M. II-397 Xiao-Min III-381, III-384 Xikui III-1210 Xin II-251, III-587 Xing IV-701, IV-853 Xingsen III-781, III-906 Xinhui IV-174 Xinmiao IV-174 Xinye III-531 Xiong IV-121 Xiuzhen II-1021 Xue-Yao III-125, III-174 Xuening II-1186 Xueyu II-978 Xuezhen I-430 Yan III-603 Yanhui I-196 Yi III-485 Yih-Lang IV-259 Yiming IV-227, IV-259 Ying II-1115 Yixue II-363 Yiyuan II-1115 Yong III-50, III-157, IV-251 Yuan I-1066 Yuanxiang IV-997, IV-1037, IV-1124,IV-1171, IV-1175, IV-1179 Li, Yueping III-401 Li, Yun II-327 Li, Yunwei IV-598 Li, Zhiguo I-1114 Li, Zhiyong IV-1183 Li, Zi-mao IV-1045, IV-1147 Liang, L. III-988 Liang, Liang IV-202 Liang, Xiaodong III-334 Liang, Yi I-318 Liang, Yong IV-1099 Lim, Eun-Cheon III-821 Lim, Gyu-Ho IV-721 Lim, Jongin III-1040 Lim, Kyung-Sup II-1194 Lim, S.C. I-374, II-1012, III-1 Lim, Sukhyun I-505 Lim, Sung-Soo IV-889, IV-897 Lin, Jun III-579

Author Index Lin, Yachen II-470 Lin, Zefu III-945 Lin, Zhun II-1178 Lin-lin, Ci III-539 Li˜ na ´n-Garc´ıa, Ernesto II-370 Ling, Yun III-591 Linton, Steve II-617 Liu, Caiming II-355, IV-166 Liu, Dayou I-160 Liu, Dingsheng II-523 Liu, Dong IV-961 Liu, Dongtao IV-701 Liu, E.L. III-1151 Liu, Fang IV-1053 Liu, Feiyu III-1188 Liu, Fengli III-762 Liu, Fengshan II-33 Liu, Gan I-426, IV-965 Liu, Guizhen III-313, III-320, III-362, III-440, III-457 Liu, Guobao II-97 Liu, Guoqiang III-1062 Liu, Haibo III-90, III-178 Liu, Hailing III-1205 Liu, Han-long III-1172 Liu, Hong III-329 Liu, Hong-Cheu I-270 Liu, Hongwei IV-1021 Liu, Jia I-168, III-677 Liu, Jiangguo (James) I-882 Liu, Jiaoyao IV-877 Liu, Jin-lan III-1008 Liu, Li II-17 Liu, Liangxu III-629 Liu, Lin III-980 Liu, Lingxia III-133 Liu, Ming III-1105 Liu, Peng II-523, II-896, IV-969 Liu, Qingtang III-587 Liu, Qizhen III-575 Liu, Quanhui III-347 Liu, Sheng IV-1068 Liu, Tianzhen III-162 Liu, Weijiang IV-793 Liu, Xiaojie II-355, IV-166 Liu, Xiaoqun IV-841 Liu, Xin II-657 Liu, Xinyu I-1238 Liu, Xinyue III-661 Liu, Xiuping II-33

Liu, Yan IV-59 Liu, Yijun IV-9 Liu, Ying III-685, III-781, IV-18 Liu, Yingchun III-1205 Liu, Yuhua III-153 Liu, Yunling IV-162 Liu, Zejia III-1210 Liu, Zhen III-543 Liu, Zhi III-595 Liu, Zongtian II-689 Lobo, Victor II-542 Lodder, Robert A. I-1002 Loidl, Hans-Wolfgang II-617 Loop, B. I-1074 Lord, R. II-415 Lorenz, Eric I-922 Lou, Dingjun III-401, III-410 Loureiro, Miguel II-542 Lu, Feng III-587 Lu, J.F. III-1228 Lu, Jianjiang I-196 Lu, Jianjun IV-162 Lu, Ruzhan II-1186, II-1214 Lu, Shiyong III-244 L¨ u, Shunying I-632 Lu, Weidong IV-312 Lu, Yi I-1197 Lu, Yunting III-410 Lu, Zhengding II-808 Lu, Zhengtian II-355 Lu, Zhongyu III-754 Luengo, F. II-89 L  ukasik, Szymon III-726 Lumsdaine, Andrew I-620 Luo, Qi III-531, III-583 Luo, Xiaonan III-485 Luo, Ying II-538, II-569 Lv, Tianyang II-97 Ma, Q. I-1189 Ma, Tieju IV-1 Ma, Xiaosong I-1058 Ma, Yinghong III-444 Ma, Yongqiang III-898 Ma, Zhiqiang III-133 Macˆedo, Autran III-281 Madey, Gregory R. I-1090 Maechling, Philip I-46 Maeno, Yoshiharu IV-74 Mahinthakumar, Kumar I-1058

1265

1266

Author Index

Mahmoudi, Babak I-964 Majer, Jonathan D. I-762 Majewska, Marta I-414 Majumdar, Amitava I-46 Malekesmaeili, Mani IV-490 Malony, Allen I-86 Mandel, Jan I-1042 Mao, Cunli IV-598 Marchal, Loris I-964 Marin, Mauricio I-229 Markowski, Marcin I-442 Marques, Vin´ıcius III-253 Marsh, Robert III-273 Marshall, John I-1155, I-1163 ´ Mart´ınez-Alvarez, R.P. III-637 Martino, Rafael N. De III-253 Martinoviˇc, Jan II-936 Mascagni, Michael I-723 Matsuzaki, Kiminori II-601, II-609 Mattos, Amanda S. de III-253 Matza, Jeff III-852 Maza, Marc Moreno II-251, II-268 McCalley, James I-1066 McGregor, Robert I-906 McMullan, Paul I-538 Mechitov, Alexander II-462 Meeker, William I-1066 M´ehats, Florian I-939 Mei, Hailiang III-424 Meire, Silvana G. II-138 Melchionna, Simone I-786 Meliopoulos, S. I-1074 Melnik, Roderick V.N. I-834 Memarsadeghi, Nargess II-503 Memik, Gokhan III-734 Meng, Fanjun II-478 Meng, Huimin III-66 Meng, Jixiang III-334 Meng, Wei III-299 Meng, Xiangyan IV-598 Merkeviˇcius, Egidijus II-439 Metaxas, Dimitris I-1114 Miao, Jia-Jia III-121 Miao, Qiankun I-700 Michopoulos, John G. I-1180 Mikucioniene, Jurate II-259 Min, Jun-Ki I-245 Min, Sung-Gi IV-441 Ming, Ai IV-522 Minster, Bernard I-46

Mirabedini, Seyed Javad II-960 Missier, Paolo II-712 Mok, Tony Shu Kam IV-1099 Molinari, Marc III-273 Montanari, Luciano II-272 Monteiro Jr., Pedro C.L. III-253 Moon, Jongbae I-382 Moon, Kwang-Seok III-10 Moore, Reagan I-46 Mora, P. III-1156 Morimoto, Shoichi II-1099, III-890 Morozov, I. III-199 Morra, Gabriele III-1122 Moshkovich, Helen II-462 Mount, David M. II-503 Mu, Chengpo I-490 Mu, Weisong III-547 Mulder, Wico III-216 Mun, Sung-Gon IV-538 Mun, Youngsong I-660, IV-514 M¨ uller, Matthias II-839 Munagala, Kamesh I-988 Muntean, Ioan Lucian I-708 Murayama, Yuji II-550 Nagel, Wolfgang E. II-823, II-839 Nah, HyunChul II-162 Nakajima, Kengo III-1085 Nakamori, Yoshiteru IV-1 Nam, Junghyun III-709 Nara, Shinsuke I-406 Narayanan, Ramanathan III-734 Narracott, A.J. I-794 Nawarecki, Edward II-944 Nedjalkov, M. I-739 Nepal, Chirag I-78, I-94 Ni, Jun III-34 Nicole, Denis A. III-273 Niemegeers, Ignas IV-312 Niennattrakul, Vit I-513 Nieto-Y´ an ˜ez, Alma IV-981 Ning, Zhuo IV-809 Niu, Ben II-319 Niu, Ke III-677 Niu, Wenyuan IV-9 Noh, Dong-Young II-347 Nong, Xiao IV-393 Noorbatcha, I. II-335 Norris, Boyana I-931

Author Index Oberg, Carl I-995 Oden, J.T. I-972 Oh, Hyukjun IV-933 Oh, Jehwan IV-594 Oh, Sangchul IV-713 Oh, Sung-Kwun IV-1076, IV-1108 Ohsawa, Yukio IV-74, IV-142 Oijen, J.A. van I-947 Oladunni, Olutayo O. I-176 Oliveira, Suely I-221 Olsen, Kim I-46 Olson, David L. II-462 Ong, Everest T. I-931 Ong, Hong II-784 Oosterlee, C.W. II-415 Ord, Alison I-62 Othman, M. I-326, I-446 Ou, Zhuoling III-162 Ould-Khaoua, M. IV-606 Ouyang, Song III-289 ¨ . ıkyılmaz, Berkin III-734 Ozıs Pacifici, Leonardo I-358 Paik, Juryon III-709 Palkow, Mark IV-761 Pan, Wei II-268 Pan, Yaozhong III-1069 Pang, Jiming II-97 Pang, Yonggang III-117, III-141 Papancheva, Rumyana I-747 Parashar, Manish I-1213 Parhami, Behrooz IV-67 Park, Ae-Soon IV-745 Park, Byungkyu II-339 Park, Chiwoo I-1197 Park, Dong-Hyun IV-344 Park, Gyung-Leen IV-449, IV-586, IV-660, IV-925 Park, Hee-Geun II-1222 Park, Heum II-1218 Park, Hyungil I-382 Park, Ilkwon IV-546 Park, Jaesung IV-629 Park, Jeonghoon IV-336 Park, Ji-Hwan III-523, IV-425 Park, JongAn III-829, IV-717 Park, Keon-Jun IV-1108 Park, ManKyu IV-368 Park, Moonju IV-881 Park, Mu-Hun IV-721

Park, Namhoon IV-713 Park, Sanghun I-25 Park, Seon-Ho III-1024 Park, Seongjin II-9 Park, So-Jeong IV-449 Park, Sooho I-1138 Park, Sungjoon III-836 Park, TaeJoon IV-368 Park, Woojin IV-869 Park, Youngsup II-114 Parsa, Saeed I-599 Paszy´ nski, M. I-342, II-912 Pathak, Jyotishman I-1066 Pawling, Alec I-1090 Pedrycz, Witold IV-1108 Pei, Bingzhen II-1214 Pei-dong, Zhu IV-393 Pein, Raoul Pascal III-754 Peiyu, Li IV-957 Peng, Dongming III-859 Peng, Hong I-430, I-497 Peng, Lingxi II-355, IV-166 Peng, Qiang II-57 Peng, Shujuan IV-997 Peng, Xia III-653 Peng, Xian II-327 Peng, Yi III-852, III-874 Peng, Yinqiao II-355 Pfl¨ uger, Dirk I-708 Pinheiro, Wallace A. III-253 Plale, Beth I-1122 Platoˇs, Jan II-936 Prasad, R.V. IV-312 Price, Andrew R. III-273 Primavera, Leonardo I-9 Prudhomme, S. I-972 Pr´ıncipe, Jos´e C. I-964 Pu, Liang III-867 Pusca, Stefan II-1053 Qi, Jianxun III-984 Qi, Li I-546 Qi, Meibin IV-1139 Qi, Shanxiang I-529 Qi, Yutao IV-1053 Qiao, Daji I-1066 Qiao, Jonathan I-237 Qiao, Lei III-615 Qiao, Yan-Jiang IV-138 Qin, Jun IV-1045

1267

1268

Author Index

Qin, Ruiguo III-599 Qin, Ruxin III-669 Qin, Xiaolin II-1131 Qin, Yong IV-67, IV-1167 Qiu, Guang I-684 Qiu, Jieshan II-280 Qiu, Yanxia IV-598 Qizhi, Zhu III-1130 Queiroz, Jos´e Rildo de Oliveira Quir´ os, Ricardo II-138

Ruan, Jian IV-251 Ruan, Qiuqi I-490 Ruan, Youlin I-426, IV-965 Ryan, Sarah I-1066 Ryu, Jae-hong IV-676 Ryu, Jihyun I-25 Ryu, Jung-Pil IV-574 Ryu, Kwan Woo II-122 II-304

Ra, Sang-Dong II-150 Rafirou, D. I-794 Rajashekhar, M. I-1171 Ram, Jeffrey III-244 Ramakrishnan, Lavanya I-1122 Ramalingam, M. II-288 Ramasami, K. II-288 Ramasami, Ponnadurai II-296 Ramsamy, Priscilla II-744, II-768 Ranjithan, Ranji I-1058 Ratanamahatana, Chotirat Ann I-513 Rattanatamrong, Prapaporn I-964 Ravela, Sai I-1147, I-1155 Regenauer-Lieb, Klaus I-62 Rehn, Veronika I-366 Rejas, R. II-1162 ReMine, Walter II-386 Ren, Lihong III-74 Ren, Yi I-462, I-466, II-974 Ren, Zhenhui III-599 Reynolds Jr., Paul F. I-1238 Richman, Michael B. I-1130 Rigau, Jaume II-105 Robert, Yves I-366, I-591 Roberts, Ron I-1066 Roch, Jean-Louis II-593 Rocha, Gerd Bruno II-312 Rodr´ıguez, D. II-1162 Rodr´ıguez-Hern´ andez, Pedro S. III-637, IV-466 Rom´ an, E.F. II-370 Romero, David II-370 Romero, Luis F. I-54 Rong, Haina IV-243, IV-989 Rong, Lili IV-178 Rongo, Rocco I-866 Rossman, T. I-1189 Roy, Abhishek I-652 Roy, Nicholas I-1138

Sabatka, Alan III-852 Safaei, F. IV-606 Sainz, Miguel A. II-166 Salman, Adnan I-86 Saltz, Joel I-1213 Sameh, Ahmed I-1205 San-Mart´ın, D. III-82 Sanchez, Justin C. I-964 S´ anchez, Ruiz Luis M. II-1004 Sandu, Adrian I-1018, I-1026 Sanford, John II-386 Santone, Adam I-1106 Sarafian, Haiduke II-203 Sarafian, Nenette II-203 Savchenko, Maria II-65 Savchenko, Vladimir II-65 Saxena, Navrati I-652 Sbert, Mateu II-105, II-166 Schaefer, R. I-342 Scheuermann, Peter III-781 Schmidt, Thomas C. IV-761 Schoenharl, Timothy I-1090 ´ Schost, Eric II-251 Schwan, K. I-1050 Scott, Stephen L. II-784 Seinfeld, John H. I-1018 Sekiguchi, Masayoshi II-178 Senel, M. I-1074 Senthilkumar, Ganapathy I-603 Seo, Dong Min III-813 Seo, Kwang-deok IV-621 Seo, SangHyun II-114, II-162 Seo, Young-Hoon II-1202, II-1222 Seshasayee, B. I-1050 Sha, Jing III-220 Shakhov, Vladimir V. IV-530 Shan, Jiulong I-700 Shan, Liu III-953 Shan-shan, Li IV-393 Shang, Weiping III-305 Shanzhi, Chen IV-522

Author Index Shao, Feng I-253 Shao, Huagang IV-644 Shao, Xinyu III-212 Shao, Ye-Hong III-377 Shao-liang, Peng IV-393 Sharif, Hamid III-859 Sharma, Abhishek I-995 Sharma, Raghunath I-603 Shen, Huizhang IV-51 Shen, Jing III-90, III-178 Shen, Linshan III-1077 Shen, Xianjun IV-1171, IV-1175, IV-1179 Shen, Yue III-109, III-555 Shen, Zuyi IV-1186 Shi, Baochang I-802, I-810, I-818 Shi, Bing III-615 Shi, Dongcai II-1115 Shi, Haihe III-469 Shi, Huai-dong II-896 Shi, Jin-Qin III-591 Shi, Xiquan II-33 Shi, Xuanhua I-434 Shi, Yaolin III-1205 Shi, Yong II-401, II-409, II-490, II-499, III-685, III-693, III-852, III-874, III-906, III-1062 Shi, Zhongke I-17 Shi-hua, Ma I-546 Shim, Choon-Bo III-821 Shima, Shinichiro I-914 Shin, Byeong-Seok I-505 Shin, Dong-Ryeol II-952 Shin, In-Hye IV-449, IV-586, IV-925 Shin, Jae-Dong IV-693 Shin, Jitae I-652 Shin, Kwonseung IV-534 Shin, Kyoungho III-236 Shin, Seung-Eun II-1202, II-1222 Shin, Teail IV-514 Shin, Young-suk II-81 Shindin, Sergey K. I-136 Shirayama, Susumu II-649 Shiva, Mohsen IV-490 Shouyang, Wang III-917 Shuai, Dianxun IV-1068 Shuang, Kai IV-785 Shukla, Pradyumn Kumar I-310, IV-1013 Shulin, Zhang III-992

Shuping, Wang III-992 Silva, Geraldo Magela e II-304 Simas, Alfredo Mayall II-312 Simmhan, Yogesh I-1122 Simon, Gyorgy I-1222 Simutis, Rimvydas II-439 Siricharoen, Waralak V. II-1155 Sirichoke, J. I-1050 Siwik, Leszek II-904 Siy, Harvey III-790 Skelcher, Chris I-1098 Skomorowski, Marek II-970 Slota, Damian I-184 Snášel, Václav II-936 Śnieżyński, Bartłomiej II-864 Soberon, Xavier II-370 Sohn, Bong-Soo I-350 Sohn, Won-Sung III-477 Soltan, Mehdi IV-490 Song, Hanna II-114 Song, Huimin III-457 Song, Hyoung-Kyu IV-344, IV-562 Song, Jeong Young IV-614 Song, Jae-Won III-902 Song, Joo-Seok II-665 Song, Sun-Hee II-150 Song, Wang-Cheol IV-925 Song, Xinmin III-1062 Song, Zhanjie II-1029, II-1075 Sorge, Volker I-1098 Souza, Jano M. de III-253 Spataro, William I-866 Spiegel, Michael I-1238 Sreepathi, Sarat I-1058 Srinivasan, Ashok I-603 Srovnal, Vilém II-936 Stafford, R.J. I-972 Stauffer, Beth I-995 Steder, Michael I-931 Sterna, Kamil I-390 Stransky, S. I-1155 Strug, Barbara II-880 Su, Benyue II-41 Su, Fanjun IV-773 Su, Hui-Kai IV-797 Su, Hung-Chi I-286 Su, Liang III-742 Su, Sen IV-785 Su, Zhixun II-33 Subramaniam, S. I-446


Succi, Sauro I-786 Sugiyama, Toru I-914 Suh, W. I-1050 Sui, Yangyi III-579 Sukhatme, Gaurav I-995 Sulaiman, J. I-326 Sun, De’an III-1138 Sun, Feixian II-355 Sun, Guangzhong I-700 Sun, Guoqiang IV-773 Sun, Haibin II-531 Sun, Jin II-1131 Sun, Jun I-278, I-294 Sun, Lijun IV-218 Sun, Miao II-319 Sun, Ping III-220 Sun, Shaorong IV-134 Sun, Shuyu I-755, I-890 Sun, Tianze III-579 Sun, Xiaodong IV-134 Šuvakov, Milovan II-641 Swain, E. I-1074 Swaminathan, J. II-288 Szabó, Gábor I-1090 Szczepaniak, Piotr II-219 Szczerba, Dominik I-906 Székely, Gábor I-906 Tabik, Siham I-54 Tackley, Paul III-1122 Tadić, Bosiljka II-633, II-641 Tadokoro, Yuuki II-178 Tahar, Sofiène II-263 Tak, Sungwoo IV-570 Takahashi, Isao I-406 Takato, Setsuo II-178 Takeda, Kenji III-273 Tan, Guoxin III-587 Tan, Hui I-418 Tan, Jieqing II-41 Tan, Yu-An III-567 Tan, Zhongfu III-984 Tang, Fangcheng IV-170 Tang, J.M. I-874 Tang, Jiong I-1197 Tang, Liqun III-1210 Tang, Sheng Qun II-681, II-736 Tang, Xijin IV-35, IV-150 Tang, Yongning IV-857 Tao, Chen III-953

Tao, Jianhua I-168 Tao, Jie II-831 Tao, Yongcai I-434 Tao, Zhiwei II-657 Tay, Joc Cing I-119 Terpstra, Frank III-216 Teshnehlab, Mohammad II-960 Theodoropoulos, Georgios I-1098 Thijsse, Barend J. I-842 Thrall, Stacy I-237 Thurner, Stefan II-625 Tian, Chunhua III-1032, IV-129 Tian, Fengzhan III-645 Tian, Yang III-611 Tian, Ying-Jie III-669, III-693, III-882 Ting, Sun III-129 Tiyyagura, Sunil R. I-128 Tobis, Michael I-931 Tokinaga, Shozo IV-162 Toma, Ghiocel II-1045 Tong, Hengqing III-162 Tong, Qiaohui III-162 Tong, Weiqin III-42 Tong, Xiao-nian IV-1147 Top, P. I-1074 Trafalis, Theodore B. I-176, I-1130 Treur, Jan II-888 Trinder, Phil II-617 Trunfio, Giuseppe A. I-567, I-866 Tsai, Wu-Hong II-673 Tseng, Ming-Te IV-275 Tsoukalas, Lefteri H. I-1074, I-1083 Tucker, Don I-86 Turck, Filip De I-454 Turovets, Sergei I-86 Uchida, Makoto II-649 Uğur, Aybars II-158 Ülker, Erkan II-49 Unold, Olgierd II-1210 Urbina, R.T. II-194 Uribe, Roberto I-229 Urmetzer, Florian II-792 Vaidya, Binod IV-717 Valuev, I. III-199 Vanrolleghem, Peter A. I-454 Vasenkov, Alex I-858 Vasyunin, Dmitry III-191 Vehí, Josep II-166

Veloso, Renê Rodrigues III-281 Venkatasubramanian, Venkat I-963 Venuvanalingam, P. II-288 Vermolen, F.J. I-70 Vías, Jesús M. I-54 Vidal, Antonio M. I-152 Viswanathan, M. III-701 Vivacqua, Adriana S. III-253 Vodacek, Anthony I-1042 Volkert, Jens II-752, II-776 Vuik, C. I-874 Vumar, Elkin III-370 Waanders, Bart van Bloemen I-1010 Wagner, Frédéric II-593 Wählisch, Matthias IV-761 Walentyński, Ryszard II-219 Wan, Wei II-538, II-569 Wang, Aibao IV-825 Wang, Bin III-381, III-384 Wang, Chao I-192 Wang, Chuanxu IV-186 Wang, Daojun III-516 Wang, Dejun II-1107 Wang, Haibo IV-194 Wang, Hanpin III-257 Wang, Honggang III-859 Wang, Hong Moon IV-578 Wang, Huanchen IV-51 Wang, Huiqiang III-117, III-141, III-1077 Wang, J.H. III-1164, III-1228 Wang, Jiabing I-497 Wang, Jian III-1077 Wang, Jian-Ming III-1114 Wang, Jiang-qing IV-1045, IV-1147 Wang, Jianmin I-192 Wang, Jianqin II-569 Wang, Jihui III-448 Wang, Jilong IV-765 Wang, Jing III-685 Wang, Jinping I-102 Wang, Jue III-964 Wang, Jun I-462, I-466, II-974 Wang, Junlin III-1214 Wang, Liqiang III-244 Wang, Meng-dong III-1008 Wang, Naidong III-1146 Wang, Ping III-389 Wang, Pu I-1090


Wang, Qingquan IV-178 Wang, Shengqian II-1037 Wang, Shouyang II-423, II-455, II-486, III-925, III-933, III-964, IV-106 Wang, Shuliang II-657 Wang, Shuo M. III-1205 Wang, Shuping III-972 Wang, Tianyou III-34 Wang, Wei I-632 Wang, Weinong IV-644 Wang, Weiwu IV-997, IV-1179 Wang, Wenqia I-490 Wang, Wu III-174 Wang, Xianghui III-105 Wang, Xiaojie II-1178 Wang, Xiaojing II-363 Wang, Xin I-1197 Wang, Xing-wei I-575 Wang, Xiuhong III-98 Wang, Xun III-591 Wang, Ya III-153 Wang, Yi I-1230 Wang, Ying II-538, II-569 Wang, You III-1101 Wang, Youmei III-501 Wang, Yun IV-138 Wang, Yuping IV-1159 Wang, Yunfeng III-762 Wang, Zheng IV-35, IV-218 Wang, Zhengning II-57 Wang, Zhengxuan II-97 Wang, Zhiying IV-251 Wang, Zuo III-567 Wang, Kangjian I-482 Warfield, Simon K. I-980 Wasynczuk, O. I-1074 Wei, Anne IV-506 Wei, Guozhi IV-506 Wei, Lijun II-482 Wei, Liu II-146 Wei, Liwei II-431 Wei, Wei II-538 Wei, Wu II-363 Wei, Yi-ming III-1004 Wei, Zhang III-611 Weihrauch, Christian I-747 Weimin, Xue III-551 Weissman, Jon B. I-1222 Wen, Shi-qing III-1172 Wendel, Patrick III-204


Wenhong, Xia III-551 Whalen, Stephen I-980 Whangbo, T.K. III-701 Wheeler, Mary F. I-1213 Wibisono, Adianto III-191 Widya, Ing III-424 Wilhelm, Alexander II-752 Willcox, Karen I-1010 Winter, Victor III-790 Wojdyla, Marek II-558 Wong, A. I-1155 Woods, John I-111 Wu, Cheng-Shong IV-797 Wu, Chuansheng IV-1116 Wu, Chunxue IV-773 Wu, Guowei III-419 Wu, Hongfa IV-114 Wu, Jiankun II-1107 Wu, Jian-Liang III-320, III-389, III-457 Wu, Jianpin IV-801 Wu, Jianping IV-817, IV-833 Wu, Kai-ya III-980 Wu, Lizeng II-978 Wu, Qiuxin III-397 Wu, Quan-Yuan I-462, I-466, II-974, III-121 Wu, Ronghui III-109, III-571 Wu, Tingzeng III-397 Wu, Xiaodan III-762 Wu, Xu-Bo III-567 Wu, Yan III-790 Wu, Zhendong IV-376 Wu, Zheng-Hong III-493 Wu, Zhijian IV-1131 Wyborn, D. III-1156 Xexéo, Geraldo III-253 Xi, Lixia IV-1091 Xia, Jingbo III-133 Xia, L. II-1083 Xia, Na IV-1139 Xia, Xuewen IV-1124 Xia, ZhengYou IV-90 Xian, Jun I-102 Xiang, Li III-1138 Xiang, Pan II-25 Xiao, Hong III-113 Xiao, Ru Liang II-681, II-736 Xiao, Wenjun IV-67 Xiao, Zhao-ran III-1214

Xiao-qun, Liu I-546 Xiaohong, Pan IV-957 Xie, Lizhong IV-801 Xie, Xuetong III-653 Xie, Yi I-640 Xie, Yuzhen II-268 Xin-sheng, Liu III-453 Xing, Hui Lin III-1093, III-1146, III-1151, III-1156, III-1205 Xing, Wei II-712 Xing, Weiyan IV-961 Xiong, Liming III-329, III-397 Xiong, Shengwu IV-1155 Xiuhua, Ji II-130 Xu, B. III-1228 Xu, Chao III-1197 Xu, Chen III-571 Xu, Cheng III-109 Xu, H.H. III-1156 Xu, Hao III-289 Xu, Hua II-956 Xu, Jingdong IV-877 Xu, Kaihua III-153 Xu, Ke IV-506 Xu, Ning IV-1155 Xu, Wei III-964 Xu, Wenbo I-278, I-294 Xu, X.P. III-988 Xu, Xiaoshuang III-575 Xu, Y. I-1074 Xu, Yang II-736 Xu, Yaquan IV-194 Xu, You Wei II-736 Xu, Zhaomin IV-725 Xu, Zhenli IV-267 Xue, Gang III-273 Xue, Jinyun III-469 Xue, Lianqing IV-841 Xue, Wei I-529 Xue, Yong II-538, II-569 Yamamoto, Haruyuki III-1146 Yamashita, Satoshi II-178 Yan, Hongbin IV-1 Yan, Jia III-121 Yan, Nian III-806 Yan, Ping I-1090 Yan, Shi IV-522 Yang, Bo III-1012 Yang, Chen III-603

Yang, Chuangxin I-497 Yang, Chunxia IV-114 Yang, Deyun II-1021, II-1029, II-1075 Yang, Fang I-221 Yang, Fangchun IV-785 Yang, Hongxiang II-1029 Yang, Jack Xiao-Dong I-834 Yang, Jianmei IV-82 Yang, Jihong Ou I-160 Yang, Jincai IV-1175 Yang, Jong S. III-432 Yang, Jun I-988, II-57 Yang, Kyoung Mi III-559 Yang, Lancang III-615 Yang, Seokyong IV-636 Yang, Shouyuan II-1037 Yang, Shuqiang III-717 Yang, Weijun III-563 Yang, Wu III-611 Yang, Xiao-Yuan III-677 Yang, Xuejun I-474, IV-921 Yang, Y.K. III-701 Yang, Young-Kyu IV-660 Yang, Zhenfeng III-212 Yang, Zhongzhen III-1000 Yang, Zong-kai III-587, IV-873 Yao, Kai III-419, III-461 Yao, Lin III-461 Yao, Nianmin III-50, III-66, III-157 Yao, Wenbin III-50, III-157 Yao, Yangping III-1146 Yazici, Ali II-186 Ye, Bin I-278 Ye, Dong III-353 Ye, Liang III-539 Ye, Mingjiang IV-833 Ye, Mujing I-1066 Yen, Jerome I-554 Yeo, Sang-Soo IV-693 Yeo, So-Young IV-344 Yeom, Heon Y. II-577, III-26, IV-705 Yi, Chang III-1069 Yi, Huizhan IV-921 Yi-jun, Chen IV-741 Yi, Sangho IV-905 Yim, Jaegeol IV-652 Yim, Soon-Bin IV-457 Yin, Jianwei II-1115 Yin, Peipei I-192 Yin, Qingbo III-10, III-149


Ying, Weiqin IV-997, IV-1061, IV-1124, IV-1179 Yongqian, Lu III-166 Yongtian, Yang III-129 Yoo, Gi-Hyoung III-894 Yoo, Jae-Soo II-154, III-813 Yoo, Kwan-Hee II-154 Yoon, Ae-sun II-1170 Yoon, Jungwon II-760 Yoon, KyungHyun II-114, II-162 Yoon, Seokho IV-360 Yoon, Seok Min IV-578 Yoon, Won Jin IV-550 Yoshida, Taketoshi IV-150 You, Jae-Hyun II-515 You, Kang Soo III-894 You, Mingyu I-168 You, Xiaoming IV-1068 You, Young-Hwan IV-344 Youn, Hee Yong IV-566 Yu, Baimin III-937 Yu, Beihai III-960 Yu, Chunhua IV-1139 Yu, Fei III-109, III-555, III-571 Yu, Jeffrey Xu I-270 Yu, Lean II-423, II-486, II-494, III-925, III-933, III-937, IV-106 Yu, Li IV-1175 Yu, Shao-Ming IV-227, IV-259 Yu, Shun-Zheng I-640 Yu, Weidong III-98 Yu, Xiaomei I-810 Yu-xing, Peng IV-393 Yu, Zhengtao IV-598 Yuan, Jinsha II-978, III-531 Yuan, Soe-Tsyr IV-433 Yuan, Xu-chuan IV-158 Yuan, Zhijian III-717 Yuanjun, He II-146 Yue, Dianmin III-762 Yue, Guangxue III-109, III-555, III-571 Yue, Wuyi IV-210, IV-352 Yue, Xin II-280 Yuen, Dave A. I-62 Yuen, David A. III-1205 Zabelok, S.A. I-850 Zain, Abdallah Al II-617 Zain, S.M. II-335 Zaki, Mohamed H. II-263


Zambreno, Joseph III-734 Zand, Mansour III-790 Zapata, Emilio L. I-54 Zechman, Emily I-1058 Zeleznikow, John I-270 Zeng, Jinquan II-355, IV-166 Zeng, Qingcheng III-1000 Zeng, Z.-M. IV-777 Zha, Hongyuan I-334 Zhan, Mingquan III-377 Zhang, Bin I-286, I-995 Zhang, CaiMing II-17 Zhang, Chong II-327 Zhang, Chunyuan IV-961 Zhang, Defu II-482 Zhang, Dong-Mei III-1114 Zhang, Fangfeng IV-59 Zhang, Gexiang IV-243, IV-989 Zhang, Guang-Zheng I-78 Zhang, Guangsheng III-220 Zhang, Guangzhao IV-825 Zhang, Guoyin III-105 Zhang, H.R. III-1223 Zhang, J. III-1093 Zhang, Jing IV-765 Zhang, Jingping II-319, II-331 Zhang, Jinlong III-960 Zhang, Juliang II-499 Zhang, Keliang II-409 Zhang, L.L. III-1164 Zhang, Li III-599 Zhang, Li-fan I-562 Zhang, Lihui III-563 Zhang, Lin I-1026 Zhang, Lingling III-906 Zhang, Lingxian III-547 Zhang, Miao IV-833 Zhang, Min-Qing III-677 Zhang, Minghua III-58 Zhang, Nan IV-35 Zhang, Nevin L. IV-26 Zhang, Peng II-499 Zhang, Pengzhu IV-174 Zhang, Qi IV-1139 Zhang, Ru-Bo III-125, III-174 Zhang, Shenggui III-338 Zhang, Shensheng III-58 Zhang, Sumei III-448 Zhang, Weifeng II-1147 Zhang, Wen III-964, IV-150

Zhang, Xi I-1213 Zhang, Xia III-362 Zhang, XianChao III-293, III-661 Zhang, Xiangfeng III-74 Zhang, Xiaoguang IV-1091 Zhang, Xiaoping III-645 Zhang, Xiaoshuan III-547 Zhang, Xuan IV-701, IV-853 Zhang, Xueqin III-615 Zhang, Xun III-933, III-964, III-1032 Zhang, Y. I-972 Zhang, Y.M. III-1223 Zhang, Yafei I-196 Zhang, Yan I-632 Zhang, Ying I-474 Zhang, Yingchao IV-114 Zhang, Yingzhou II-1147 Zhang, Zhan III-693 Zhang, Zhen-chuan I-575 Zhang, Zhiwang II-490 Zhangcan, Huang IV-1005 Zhao, Chun-feng III-1197 Zhao, Guosheng III-1077 Zhao, Hui IV-166 Zhao, Jidi IV-51 Zhao, Jie III-984 Zhao, Jijun II-280 Zhao, Jinlou III-911 Zhao, Kun III-882 Zhao, Liang III-583 Zhao, Ming I-964 Zhao, Ming-hua III-1101 Zhao, Qi IV-877 Zhao, Qian III-972 Zhao, Qiang IV-1021 Zhao, Qingguo IV-853 Zhao, Ruiming III-599 Zhao, Wen III-257 Zhao, Wentao III-42 Zhao, Xiuli III-66 Zhao, Yan II-689 Zhao, Yaolong II-550 Zhao, Yongxiang IV-1155 Zhao, Zhiming III-191, III-216 Zhao, Zun-lian III-1012 Zheng, Bojin IV-1029, IV-1037, IV-1171, IV-1179 Zheng, Di I-462, I-466, II-974 Zheng, Jiping II-1131 Zheng, Lei II-538, II-569

Zheng, Rao IV-138 Zheng, Ruijuan III-117 Zheng, SiYuan II-363 Zheng, Yao I-318, I-482 Zheng, Yujun III-469 Zhengfang, Li IV-283 Zhiheng, Zhou IV-283 Zhong, Shaobo II-569 Zhong-fu, Zhang III-453 Zhou, Bo I-196 Zhou, Deyu II-378 Zhou, Jieping III-516 Zhou, Ligang II-494 Zhou, Lin III-685 Zhou, Peiling IV-114 Zhou, Wen II-689 Zhou, Xiaojie II-33 Zhou, Xin I-826 Zhou, Zongfang III-1062

Zhu, Aiqing III-555 Zhu, Changqian II-57 Zhu, Chongguang III-898 Zhu, Egui III-575 Zhu, Jianhua II-1075 Zhu, Jiaqi III-257 Zhu, Jing I-46 Zhu, Meihong II-401 Zhu, Qiuming III-844 Zhu, Weishen III-145 Zhu, Xilu IV-1183 Zhu, Xingquan III-685, III-781 Zhu, Yan II-1067 Zhuang, Dong IV-82 Zienkiewicz, O.C. III-1105 Zong, Yu III-661 Zou, Peng III-742 Zuo, Dong-hong IV-873 Zurita, Gustavo II-799
