Computational Science - ICCS 2007: 7th International Conference, Beijing, China, May 27-30, 2007, Proceedings, Part II (Lecture Notes in Computer Science, 4488)
ISBN 3540725857, 9783540725855

Part of a four-volume set, this book constitutes the refereed proceedings of the 7th International Conference on Computational Science, ICCS 2007, held in Beijing, China, in May 2007.

Table of contents:
Title page
Preface
Organization
Table of Contents
Resolving Occlusion Method of Virtual Object in Simulation Using Snake and Picking Algorithm
Introduction
Methodology
Formation of Wireframe Using DEM and Registration with Real Images Using Visual Clues
Extracting the Outline of Objects and Acquiring 3D Information
Acquisition of 3D Information Using the Picking Algorithm
Creation of 3D Information Using Proportional Relational Expression
Creation of Virtual Target Path and Selection of Candidate Occlusion Objects Using MER (Minimum Enclosing Rectangle)
Experimental Results
Conclusions
References
Graphics Hardware-Based Level-Set Method for Interactive Segmentation and Visualization
Introduction
Level-Set Method on Graphics Hardware
Memory Manager
Level-Set Solver
Volume Renderer
Experimental Result
Conclusion
References
Parameterization of Quadrilateral Meshes
Introduction
Parameterization
Simplification Algorithm
Global Parameterization
Local Parameterization
Mesh Optimization
Examples
Conclusions
References
Pose Insensitive 3D Retrieval by Poisson Shape Histogram
Introduction
Related Work
Poisson Equation
Poisson Shape Histogram and Matching
Experiment
Conclusion and Future Work
References
Point-Sampled Surface Simulation Based on Mass-Spring System
Introduction
Related Work
Simplification of Point-Sampled Surface
Simulation Based on Mass-Spring System
Structure of the Springs
Estimation of the Mass
Forces
Simulation
Deformation of the Original Point-Sampled Surface
Experimental Results
Conclusion
Sweeping Surface Generated by a Class of Generalized Quasi-cubic Interpolation Spline
Introduction
$C^2$-Continuous Generalized Quasi-cubic Interpolation Spline
Sweep Surface Modeling
The Construction of Spine and Cross-Section Curves
The Moving Frame
The Modeling Examples
Conclusions and Discussions
An Artificial Immune System Approach for B-Spline Surface Approximation Problem
Introduction
B-Spline Surface Approximation
B-Spline Surface Approximation by Artificial Immune System
Experimental Results
Conclusion and Future Work
References
Implicit Surface Reconstruction from Scattered Point Data with Noise
Introduction
Filtering
Covariance Analysis
Mean Shift Filtering
Implicit Surface Reconstruction
Adaptive Space Subdivision
Estimating Local Shape Functions
Partition of Unity
Applications and Results
Conclusion and Future Work
References
The Shannon Entropy-Based Node Placement for Enrichment and Simplification of Meshes
Introduction
Vertex Placement Algorithm
Experimental Results
Concluding Remarks
Parameterization of 3D Surface Patches by Straightest Distances
Introduction
Related Work
Our Parameterization by Straightest Distances
Our Local Straightest Distances
Discussion
Measured Boundary for Parameterization
Results
Conclusion and Future Work
Facial Expression Recognition Based on Emotion Dimensions on Manifold Learning
Introduction
Database on Dimensions of Emotion
Facial Expression Representation from Manifold Learning
Preprocessing
Locally Linear Embedding Representation
Result and Discussion
References
AI Framework for Decision Modeling in Behavioral Animation of Virtual Avatars
Introduction
Behavioral System
Environment Recognition
Knowledge Acquisition
Decision Modeling
Action Planning and Execution
Two Illustrative Examples
Conclusions and Future Work
Studies on Shape Feature Combination and Efficient Categorization of 3D Models
Introduction
An Automatic Decision Method of Features’ Fixed-Weight
Efficient Categorization of 3D Models Based on Clustering Result
Experiment and Analysis
Conclusions
References
A Generalised-Mutual-Information-Based Oracle for Hierarchical Radiosity
Introduction
Preliminaries
Radiosity
HCT Entropy and Generalised Mutual Information
Generalised Mutual Information-Based Oracle
Results
Conclusions
Rendering Technique for Colored Paper Mosaic
Introduction
Related Work
Preprocessing
Data Structure of Colored Paper
Image Segmentation
The Generation of Colored Paper Tile
Determination of Size and Color
Determination of Shape
The Arrangement of Colored Paper Tile
Results
Discussion and Future Work
Real-Time Simulation of Surface Gravity Ocean Waves Based on the TMA Spectrum
Introduction
The Ocean Wave Model
The Implementation Model
Implementation Results
Conclusion
Determining Knots with Quadratic Polynomial Precision
Introduction
Basic Idea
Determining Knots
Constructing a Quadratic Polynomial with Four Points
Determining Knots
Experiments
Conclusions and Future Works
Interactive Cartoon Rendering and Sketching of Clouds and Smoke
Introduction
Previous Work
Scene Modeling
Rendering
Details Layer for Clouds
Smoke Simulation
Results
Conclusions and Future Work
References
Spherical Binary Images Matching
Introduction
Our Method
Conclusion and Further Work
References
Dynamic Data Path Prediction in Network Virtual Environment
Introduction
Dynamic Data Path Prediction
Path Prediction Using Dead-Reckoning Algorithm
Conclusion
References
Modeling Inlay/Onlay Prostheses with Mesh Deformation Techniques
Introduction
Modeling the Outer Surfaces of Inlay/Onlays
Experiments and Future Works
References
Automatic Generation of Virtual Computer Rooms on the Internet Using X3D
Introduction
Extensible 3D (X3D)
Automatic Computer Room Generation Tool
Conclusion
Stained Glass Rendering with Smooth Tile Boundary
Introduction
Related Work
Glass Tile Generation for the Stained Glass
Region Generation of Glass Tiles
Interpolation of the Region Boundaries
Re-segmentation of the Region
Determination of the Colors for Each Region
Conclusion
Guaranteed Adaptive Antialiasing Using Interval Arithmetic
Introduction
Interval Adaptive Antialiasing ($IAA$)
Experimentation and Results
Conclusions
Restricted Non-cooperative Games
Introduction
From Strategy Topologies and Payoff Arrays to Game Networks
Scoring Game Networks
A New Kind of Science Approach to Game Networks
Conclusion
A New Application of CAS to LaTeX Plottings
Introduction
Requirements for CAS-Aided LaTeX Plottings
A Successful Example: $KETpic$ for Maple
Maple Satisfies the Requirements
Special Functions or Functions Defined by Integrals
Curves Defined by Implicit Functions or with Parameters
Conclusions and Future Work
JMathNorm: A Database Normalization Tool Using Mathematica
Introduction
A Discussion on Normalization Algorithms
Mathematica Implementation
BCNF with Mathematica
JMathNorm User Interface
Tests and Discussions
Symbolic Manipulation of Bspline Basis Functions with Mathematica
Introduction
Mathematical Preliminaries
Symbolic Computation of Bspline Basis Functions
References
Rotating Capacitor and a Transient Electric Network
Introduction and Motivation
Analysis
Modes of Mechanical Rotations
Electrical Networks
Characteristics of Charging and Discharging a DC Driven RC Circuit with Time-Dependent Uniformly Rotating Plates
Characteristics of Charging and Discharging DC Driven RC Circuit with Time-Dependent Accelerated Rotating Plates
Characteristics of Charging and Discharging an AC Driven RC Circuit with Time-Dependent Capacitor
Conclusions
References
Numerical-Symbolic Matlab Program for the Analysis of Three-Dimensional Chaotic Systems
Introduction
Program Architecture and Implementation
Some Illustrative Examples
Visualization of Chaotic Attractors
Symbolic-Numerical Analysis of Chaotic Systems
Conclusions and Further Remarks
References
Safety of Recreational Water Slides: Numerical Estimation of the Trajectory, Velocities and Accelerations of Motion of the Users
Introduction
Typical Geometry of Water Slides
Model of a Sliding Person
Equations of Motion
Sample Results
Conclusions
References
Computing Locus Equations for Standard Dynamic Geometry Environments
Introduction
Numerical vs. Symbolic Loci
User Interface and Architecture
Examples
An Ellipse
Limaçon of Pascal
A Simple Locus, Different Answers
Extending the Scope of Loci Computations
Conclusion
References
Symbolic Computation of Petri Nets
Introduction
Basic Concepts and Definitions
The $Mathematica$ Package for Petri Nets
Conclusions and Further Remarks
References
Dynaput: Dynamic Input Manipulations for 2D Structures of Mathematical Expressions
Introduction
Input Interfaces of Computer Algebra Systems
The Present Work
GUI Operations
Drag and Drop
Drag and Draw
Dynaput Operation
Data Structure
Layout Tree
Binary Tree of Symbol Objects
Conclusion
References
On the Virtues of Generic Programming for Symbolic Computation
Introduction
Software Overview
AXIOM Polynomial Domain Constructors
Finite Field Arithmetic
Polynomial Arithmetic
Code Connection
Experimentation
Conclusion
References
Semi-analytical Approach for Analyzing Vibro-Impact Systems
Introduction
Implementation of Computer Algebra
Application of Abrasive Treatment Process Dynamics in the Investigation
Conclusions
References
Formal Verification of Analog and Mixed Signal Designs in Mathematica
Introduction
Implementation in Mathematica
First-Order $\Delta\Sigma$ Modulator
Conclusions
References
Efficient Computations of Irredundant Triangular Decompositions with the RegularChains Library
Introduction
Inclusion Test of Quasi-components
Removing Redundant Components
References
Characterisation of the Surfactant Shell Stabilising Calcium Carbonate Dispersions in Overbased Detergent Additives: Molecular Modelling and Spin-Probe-ESR Studies
Introduction
Experimental Methods
Results
Discussion
References
Hydrogen Adsorption and Penetration of C$_x$ (x=58-62) Fullerenes with Defects
Introduction
Computational Methods
Results and Discussion
References
Ab Initio and DFT Investigations of the Mechanistic Pathway of Singlet Bromocarbenes Insertion into C-H Bonds of Methane and Ethane
Introduction
Computational Details
Results and Discussion
Singlet Bromocarbenes Insertion into Methane and Ethane
Energetics
Transition State Geometries
NBO Analyses
IRC - Charge Analyses
Summary
References
Theoretical Gas Phase Study of the Gauche and Trans Conformers of 1-Bromo-2-Chloroethane and Solvent Effects
Introduction
Calculations
Results and Discussion
Conclusions
References
Dynamics Simulation of Conducting Polymer Interchain Interaction Effects on Polaron Transition
Introduction
Model
Simulation Results
Conclusions
References
Cerium (III) Complexes Modeling with Sparkle/PM3
Introduction
Parameterization Procedure
Results and Discussion
Conclusion
References
The Design of Blue Emitting Materials Based on Spirosilabifluorene Derivatives
Introduction
Computational Details
Results and Discussions
Comparison of Computational Methods and Optical Characteristics for 1a-1d
Optimized Geometries and Optical Properties for the “CH”/N Substituted Derivatives
Optical Property for Dimer
Conclusions
References
Regulative Effect of Water Molecules on the Switches of Guanine-Cytosine (GC) Watson-Crick Pair
Introduction
Results and Discussions
Hydration Schemes of GNH-C and Regulation Effect of Hydration
Conclusions
References
Energy Partitioning Analysis of the Chemical Bonds in $mer$-Mq3 (M = Al^III, Ga^III, In^III, Tl^III)
Introduction
Methodology
Results and Discussion
The Optimized Structure Analyses of S0 for mer-Mq3
The Frontier Molecular Orbitals (FMO) Analyses of S0 for mer-Mq3
The Energy Decomposition Analyses for mer-Mq3 in S0 States
Conclusions
References
Ab Initio Quantum Chemical Studies of Six-Center Bond Exchange Reactions Among Halogen and Halogen Halide Molecules
Introduction
Results and Discussion
Conclusion
References
Comparative Analysis of the Interaction Networks of HIV-1 and Human Proteins
Introduction
Methods
Results and Discussion
Conclusion
References
Protein Classification from Protein-Domain and Gene-Ontology Annotation Information Using Formal Concept Analysis
Introduction
Methods
Formal Concept Analysis
Tripartite Lattice: Interpenetrations Among Three Distinct Sets of Elements
Results
Bipartite Lattice
Tripartite Lattice
Discussion
References
A Supervised Classifier Based on Artificial Immune System
Introduction
Proposed Classifier
Initialization
Clone and Mutation of ARB
Competition for Resources
Consolidating and Controlling the Memory Cells Pool
Classification Process
Experiments
Conclusion
References
Ab-origin: An Improved Tool of Heavy Chain Rearrangement Analysis for Human Immunoglobulin
Introduction
Method
Germline Database
Principle
Algorithm
Validation
Results and Discussions
Conclusions
References
Analytically Tuned Simulated Annealing Applied to the Protein Folding Problem
Introduction
Analytical Tuning
Setting Initial and Final Temperatures
Setting the Markov Chain Length
Implementation
Results
Conclusions
References
Training the Hidden Vector State Model from Un-annotated Corpus
Introduction
The Hidden Vector State Model
Methodologies
k-Nearest-Neighbor Classification with Constraints
POS Sequence Alignment
Experiments
Choosing Proper k
Extraction Results
Conclusions and Future Work
References
Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load
Introduction
The Program
Analysis
Conclusions
References
An Object Model Based Repository for Biological Pathways Using XML Database Technology
Introduction
Design Considerations
Implementation
Discussions
References
Protein Folding Simulation with New Move Set in 3D Lattice Model
Introduction
New Move Set
Algorithm
Experimental Results
Conclusion
References
A Dynamic Committee Scheme on Multiple-Criteria Linear Programming Classification Method
Introduction
Background Knowledge
Two-Class MCLP Classification Model
Progressive Sampling
Classification Committee
A Dynamic Committee Scheme on MCLP Classification Method
Empirical Research
Experiments and Results
Analysis of Results
Conclusions and Future Efforts
References
Kimberlites Identification by Classification Methods
Introduction
Data Description and Data Preprocessing
Methods
SVM
Decision Tree
Cross-Validation
Experimental Results
Conclusion
A Fast Method for Pricing Early-Exercise Options with the FFT
Introduction
FFT-Based Methods for Option Pricing in Literature
The CONV Method
Implementation
Discrete CONV Formula
Computational Complexity and Convergence Rate
Numerical Results
Conclusions and Future Works
References
Neural-Network-Based Fuzzy Group Forecasting with Application to Foreign Exchange Rates Prediction
Introduction
Neural-Network-Based Fuzzy Group Forecasting Methodology
Experiments
An Illustrative Numerical Example
Three Foreign Exchange Rates Prediction Experiments
Conclusions
References
Credit Risk Evaluation Using Support Vector Machine with Mixture of Kernel
Introduction
Support Vector Machine with Mixture of Kernel
Experiment Analysis
Experiment Result
Comparison of Results of Different Credit Risk Evaluation Models
Conclusions
References
Neuro-discriminate Model for the Forecasting of Changes of Companies Financial Standings on the Basis of Self-organizing Maps
Introduction
Related Work
Methodology
Results of Testing
Conclusions
References
A New Computing Method for Greeks Using Stochastic Sensitivity Analysis
Introduction
Malliavin Calculus
European Options
Delta
Vega
Gamma
Asian Options
Delta
Vega
Gamma
Monte Carlo Simulations of Asian Option
Conclusions
Application of Neural Networks for Foreign Exchange Rates Forecasting with Noise Reduction
Introduction
Nonlinear Noise Reduction
Simple Nonlinear Noise Reduction (SNL)
Locally Projective Nonlinear Noise Reduction (LP)
Experiments Design
Neural Network Models
Random Walk Model
Performance Measure
Data Preparation
Experiments Results
Conclusions
References
An Experiment with Fuzzy Sets in Data Mining
Introduction
Fuzzy Set Experiments in See5
Fuzzy Sets and Ordinal Classification Task
Conclusions
References
An Application of Component-Wise Iterative Optimization to Feed-Forward Neural Networks
Introduction
An Application of CIO to Feed-Forward Neural Networks
An Illustrative Example
References
ERM-POT Method for Quantifying Operational Risk for Chinese Commercial Banks
Introduction
The ERM-POT Method
Experiment
Data Set
Results and Analysis
Conclusions
References
Building Behavior Scoring Model Using Genetic Algorithm and Support Vector Machines
Introduction
GA+SVM Model for Behavior Scoring Problems
Experimental Results
Conclusions
References
An Intelligent CRM System for Identifying High-Risk Customers: An Ensemble Data Mining Approach
Introduction
The SVM-Based Ensemble Data Mining System for CRM
Experimental Analysis
Conclusions
References
The Characteristic Analysis of Web User Clusters Based on Frequent Browsing Patterns
Introduction
Related Work
Taxonomy of Web Mining
WUM
Data Selecting and Preprocessing
Frequent Patterns Discovery
FSP Discovery
Results of Experimentation
The Characteristics Analysis of Web User Clusters
Similarity Measures
Algorithm
Results of Experimentation
Comparison of the Results and Conclusions
References
A Two-Phase Model Based on SVM and Conjoint Analysis for Credit Scoring
Introduction
Two-Stage Model Based on SVM and Conjoint Analysis
Empirical Study
Conclusion
A New Multi-Criteria Quadratic-Programming Linear Classification Model for VIP E-Mail Analysis
Introduction
Formulation of Multi-Criteria Quadratic-Programming Linear Classification Model
VIP E-Mail Dataset
Empirical Study of Cross-Validation
Comparison of MQLC and Decision Tree
Conclusion
References
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
Introduction
Ordinary Kriging
Tapering Covariances
Our Approaches
Data Sets
Experiments
Conclusion
Development of an Efficient Conversion System for GML Documents
Introduction
Related Works
System Development
Performance Evaluation
Conclusion
References
Effective Spatial Characterization System Using Density-Based Clustering
Introduction
Spatial Characterization Method Using Density-Based Clustering
Proposed Spatial Characterization Method
Density-Based Spatial Clustering
Design and Implementation of Spatial Characterization System
Evaluation and Result
Conclusion
References
MTF Measurement Based on Interactive Live-Wire Edge Extraction
Introduction
Live-Wire Model
Weighted Map and Local Cost
Searching for an Optimal Path
MTF Measurement Based on Live-Wire
Conclusions
References
Research on Technologies of Spatial Configuration Information Retrieval
Introduction
Spatial Mapping Relationships
Topological Mapping Relationships
Direction Mapping Relationships
Conclusions
Modelbase System in Remote Sensing Information Analysis and Service Grid Node
Introduction
Remote Sensing Information Analysis and Service Grid Node
Modelbase System in Remote Sensing Information Analysis and Service Grid Node
Modelbase File System
Modelbase Management System
Conclusion and Future Work
References
Density Based Fuzzy Membership Functions in the Context of Geocomputation
Introduction
Problem Statement
Density Estimation Using a Self-Organizing Map
An Algorithm to Compute Fuzzy Membership Functions
Preliminary Results with Artificial Data
Conclusions and Future Work
References
A New Method to Model Neighborhood Interaction in Cellular Automata-Based Urban Geosimulation
Introduction
Modeling Neighborhood Interaction
Calibration of Neighborhood Interaction
Study Area and Data Set
Calibration of the Neighborhood Interaction
Simulation and Results
Concluding Remarks
References
Artificial Neural Networks Application to Calculate Parameter Values in the Magnetotelluric Method
Introduction
Data Processing
Neural Approach
Results
Concluding Remarks
Integrating Ajax into GIS Web Services for Performance Enhancement
Introduction
Background
OGC GIS Web Services
Ajax (Asynchronous JavaScript and XML)
Related Works
Integrated Ajax into GIS Web Services
Performance Evaluation Between Using Ajax and Using Non-Ajax in GIS Web Services
Evaluation Metrics and Method
Evaluation Environments and Test Data Design
Evaluation Results
Conclusion
References
Aerosol Optical Thickness Retrieval over Land from MODIS Data on Remote Sensing Information Service Grid Node
Introduction
SYNTAM Model
AOT Retrieval Services on RSIN
Remote Sensing Information Service Grid Node
Grid Implementation of AOT Retrieval
Experiments and Results
Conclusions
References
Universal Execution of Parallel Processes: Penetrating NATs over the Grid
Introduction
General Concepts
Problem Domain
Relaying
Connection Reversal
Design and Implementation
Overall Architecture of MPICH-GU
Penetrating NATs
Discussion
Experimental Results
Communication Performance
Workload Performance
Conclusion
Parallelization of C# Programs Through Annotations
Introduction
Prototype Framework
Experimental Results
Related Work
Conclusions
Fine Grain Distributed Implementation of a Dataflow Language with Provable Performances
Introduction
Model for Recursive Dataflow Computations
Distributed Implementation: DDS
Theoretical Analysis
Experiments
Conclusions
Efficient Parallel Tree Reductions on Distributed Memory Environments
Introduction
Preliminaries
Trees and Their Serialized Representation
Tree Homomorphism and Extended Distributivity
Parallel Computation over the Serialized Trees
Experiments and Discussion
Concluding Remarks
Efficient Implementation of Tree Accumulations on Distributed-Memory Parallel Computers
Introduction
Internal Representation of Binary Trees
Implementation and Cost Model of Tree Accumulations
Optimal Division of Binary Trees Based on Cost Model
Experiment Results
Conclusion
SymGrid-Par: Designing a Framework for Executing Computational Algebra Systems on Computational Grids
Introduction
The SymGrid-Par Middleware Design
GCA Design
CAG Design
The GCA Prototype
Preliminary GCA Prototype Performance Results
Related Work
Conclusions
Directed Network Representation of Discrete Dynamical Maps
Introduction
Method
Results
Logistic Map
Tent Map
Discussion
Dynamical Patterns in Scalefree Trees of Coupled 2D Chaotic Maps
Introduction
The Tree System Set-Up and the Coupled Dynamics
The Average Trajectory and the Fourier Transform Definition of Synchronization
The Cluster Synchronization
Conclusions and Outlook
Simulation of the Electron Tunneling Paths in Networks of Nano-particle Films
Introduction
Mapping of Nano-particle Films to Networks
Particle Deposition
Emergent Networks
Conduction
Tunneling Implementation
Conduction Paths and Nonlinear Currents
Conclusions
Classification of Networks Using Network Functions
Introduction
Glauber Dynamics on Networks with Arbitrary Initial Conditions
Numerical Studies
Network Models and Initial Conditions
Numerical Results and Typical Input-Output Relations
Classification of Networks
Application to Real-World Networks
Conclusion
Effective Algorithm for Detecting Community Structure in Complex Networks Based on GA and Clustering
Introduction
The Method of Extracting Communities Based on GA
Bi-partitioning Strategy
Dividing a Network into Two Using GA
Community Detection in Scale-Free Networks of Large Size
Experiments and Results
Computer-Generated Networks
Zachary’s Karate Club Network
UML Class Diagram
Other Real-World Networks
Conclusion
References
Mixed Key Management Using Hamming Distance for Mobile Ad-Hoc Networks
Introduction
Concepts
Secure Communication Channel
Secure Key Management
Preparation
Node Initialization
Secret Key Exchange
Secure Communication
Path Construction Algorithm Using Hamming Distance
Simulation Results for the Path Construction Algorithm
Conclusion
An Integrated Approach for QoS-Aware Multicast Tree Maintenance
Introduction
Background
Proposed Algorithm
Simulation Model and Results
Conclusions
A Categorial Context with Default Reasoning Approach to Heterogeneous Ontology Integration
Introduction
Categorial Context Definition
Our Formal Framework: CContext-SHOIQ(D^+) DL
Syncretizing Default Reasoning into CContext-SHOIQ(D^+)
Related Default Reasoning
Default Reasoning Approach to Categorial Context Functor
A Context-Aware Prototype System
Related Works and Conclusion
References
An Interval Lattice Model for Grid Resource Searching
Introduction
An Interval Formal Concept Analysis Model
The Interval Lattice Construction Principle
The Resource Searching Algorithm in Grid Environment
Experiments and Discussion
Conclusion
References
Topic Maps Matching Computation Based on Composite Matchers
Introduction
Related Work
Problem Definition
Topic Maps Data Model
Topic Maps Matching Process
Similarity Computation
Name Matching Operation
Internal Structure Matching Operation
External Structure Matching Operations
Association Matching Operation
Experiment
Conclusion
References
Social Mediation for Collective Intelligence in a Large Multi-agent Communities: A Case Study of AnnotGrid
Introduction
Constructing Social Network
Semantic Similarity by Ontology Mapping
Co-occurrence Patterns
Community-Based Social Mediation
Experimentation: A Case Study of $AnnotGrid$ Environment
Related Work
Concluding Remarks and Future Work
Metadata Management in S-OGSA
Introduction
Metadata Management Requirements
Metadata Management in S-OGSA
Semantic Binding Capabilities
Semantic Binding Lifetime
Notification of Semantic Binding Changes
Security over Semantic Bindings
Conclusions and Future Work
Access Control Model Based on RDB Security Policy for OWL Ontology
Introduction
Persistent OWL Storage Model in Relational Databases
OWL-DL Ontology Model
Permanent Storage Model
OWL Security Model
Definition of Access Control Model
Views for OWL Ontology
Experiment and Evaluation
Related Work
Conclusion
References
Semantic Fusion for Query Processing in Grid Environment
Introduction
Communication Mechanism with Semantic Grid
The Procedure of Semantic Fusion
Discussion and Conclusion
SOF: A Slight Ontology Framework Based on Meta-modeling for Change Management
Introduction
SOF: A Slight Ontology Framework Based on Meta-modeling
Ontologies Used to Model Services
The Profile Part
The Process Part
Approach
Changes and Consistency
Changes Management Process
The Partial Implementation of SOF
Related Works
Conclusions
References
Data Forest: A Collaborative Version
Introduction
Related Work
A Data Forest
Application Areas
Stock Market Data
Network Data
Marketing Data
Conclusion and Future Work
Net’O’Drom – An Example for the Development of Networked Immersive VR Applications
Introduction
Related Work
Conceptual Design
The Principle of Mutual Dependent Interaction
Technical Implementation of 3D Models
Game Architecture
Structuring the VE
Physics
Network
Graphical Effects
Conclusions and Future Work
Intelligent Assembly/Disassembly System with a Haptic Device for Aircraft Parts Maintenance
Introduction
Description of Aircraft Part Maintenance System
Aircraft Maintenance System Algorithm
Experiment and Result
Conclusion
References
Generic Control Interface for Networked Haptic Virtual Environments
Introduction
Software Architecture
Transport Protocol Used
The Client/Server Architecture
Application Development
Device Operations
Conclusion and Future Work
Physically-Based Interaction for Networked Virtual Environments
Introduction
Related Work
Framework Architecture
Physics Module
Network Communication
Example Setups
A Door in the VE
Natural Interaction
Concurrent Object Manipulation
Conclusions and Future Work
Middleware in Modern High Performance Computing System Architectures
Introduction
Modern HPC System Architectures
Modern HPC Middleware
Discussion
Conclusion
Usability Evaluation in Task Orientated Collaborative Environments
Introduction
Methods for Usability Evaluation and Testing
The Collaborative Challenge
Enabling Usability Evaluation in Collaborative Environments
Future Directions and Conclusion
References
Developing Motivating Collaborative Learning Through Participatory Simulations
Introduction
Related Work
Developing a Framework
Applications Implemented Using the Framework
Trust Building Simulation
Stock Market Simulation
Discussion and Future Work
References
A Novel Secure Interoperation System
Introduction
Related Works
Framework for ASITL
Trust Evaluation
Basic Definitions
Working Flow of Abnormal Judging
Trust-Level Calculating
An Example
Conclusions and Future Work
References
Scalability Analysis of the SPEC OpenMP Benchmarks on Large-Scale Shared Memory Multiprocessors
Introduction
The SPEC OpenMP Benchmarks
Scalability Analysis Methodology
Results
Related Work
Conclusion and Future Work
Analysis of Linux Scheduling with VAMPIR
Introduction
Analyzing Scheduling Events in the Linux Kernel
Tracing Scheduling Events
Using VAMPIR to Analyze Scheduling Events
OTF Converter
Example
Conclusion
References
An Interactive Graphical Environment for Code Optimization
Introduction
DECO Design and Implementation
Visualizing the Access Pattern
Conclusions
Memory Allocation Tracing with VampirTrace
Introduction
Impact of Memory Allocation on Application Performance
Memory Allocators
Related Work
Instrumentation of Allocation/De-Allocation Operations
The proc File System
The mallinfo Interface
Autonomous Recording
Measurement Overhead
Representation of Result Data
Novel Record Types
Generic Performance Counters
Application Example
Conclusion and Outlook
Automatic Memory Access Analysis with Periscope
Introduction
Related Work
Architecture
Monitoring Memory Accesses
Application Experiments
Summary
A Regressive Problem Solver That Uses Knowledgelet
Introduction
Knowledgelet
Structure of Knowledgelet
Regressive General Total Order Planner, RTOP
Example of Applying the RTOPKLT
Solution to the Problem When There Is a Partial Plan
Solution to the Problem When There Is No Partial Plan
References
Resource Management in a Multi-agent System by Means of Reinforcement Learning and Supervised Rule Learning
Introduction
Learning in Multi-agent Systems
System Description
Agents
Learning Agents
Implementation
Experimental Results
Conclusion and Further Research
Learning in Cooperating Agents Environment as a Method of Solving Transport Problems and Limiting the Effects of Crisis Situations
Introduction
State of the Art
Model
Environment
Agent-Dispatcher
Agent-Vehicles
Traffic Jams
Traffic Patterns
Experiments
Conclusions
References
Distributed Adaptive Design with Hierarchical Autonomous Graph Transformation Systems
Introduction
Graph Structures in the Design Process
Derivation Control Environment
Synchronization and Adaptation in Design
Conclusions
Integration of Biological, Psychological, and Social Aspects in Agent-Based Simulation of a Violent Psychopath
Introduction
Case Study: Violent Psychopath
The Integrated Simulation Model
An Example Simulation Trace
Discussion
References
A Rich Servants Service Model for Pervasive Computing
Introduction
Current Researches of Pervasive Computing
RSS Model
Prototype Based on RSS Model
Conclusions
References
Techniques for Maintaining Population Diversity in Classical and Agent-Based Multi-objective Evolutionary Algorithms
Introduction
Previous Research on Maintaining Population Diversity in Evolutionary Multi-objective Algorithms
Introducing Flock-Based Operators into Evolutionary Multi-agent System
Sexual Selection as a Technique for Maintaining Population Diversity in CoEMAS for Multi-objective Optimization
Conclusions
Agents Based Hierarchical Parallelization of Complex Algorithms on the Example of hp Finite Element Method
Introduction
Hexahedral 3D hp Finite Element
Computational Problems
Load Balancing Problem
Agents Based Hierarchical Parallelization
Conclusions and Future Work
Sexual Selection Mechanism for Agent-Based Evolutionary Computation
Introduction
Previous Research on Sexual Selection as a Speciation Mechanism
Sexual Selection Mechanism for Co-evolutionary Multi-agent System
Experimental Results
Summary and Conclusions
Agent-Based Evolutionary and Immunological Optimization
Introduction
Evolutionary Multi-agent Systems
Artificial Immune Systems
Immunological Selection in EMAS
Configuration of the Examined Systems
Experimental Results
Conclusion
Strategy Description for Mobile Embedded Control Systems Exploiting the Multi-agent Technology
Introduction
Technical Implementation
Game Field Description
Game Strategy
Basic Description of Strategy Selection Process
Future Research
Conclusion
References
Agent-Based Modeling of Supply Chains in Critical Situations
Introduction
Critical Situations in MAS
Structure of Agent-Based Management of Crises
Agent-Based Management of Supply Chains
Summary
References
Web-Based Integrated Service Discovery Using Agent Platform for Pervasive Computing Environments
Introduction
Proposed Mechanism and System Architecture
Integrated Service Discovery Mechanism
System Architecture
Conclusion
References
A Novel Modeling Method for Cooperative Multi-robot Systems Using Fuzzy Timed Agent Based Petri Nets
Introduction
Basic Concepts
FTOPN
Agent Object and FTAPN
A Modeling Example
A CMRS Model
Conclusions
References
Performance Evaluation of Fuzzy Ant Based Routing Method for Connectionless Networks
Introduction
Routing Algorithms
Ant Based Routing
Fuzzy Ant Based Routing Method
Simulation and Results
Conclusions
References
Service Agent-Based Resource Management Using Virtualization for Computational Grid
Introduction
Service Agent-Based Resource Management Model
Experiments and Performance Evaluation
Conclusion
References
Fuzzy-Aided Syntactic Scene Analysis
Introduction
Fuzzy $IE$ Graphs for Fuzzy Patterns Representation
Parallel Parsing of Fuzzy $IE$ Graphs
Concluding Remarks
Agent Based Load Balancing Middleware for Service-Oriented Applications
Introduction
Architecture of the Load Balancing Middleware
Conclusions
References
A Transformer Condition Assessment System Based on Data Warehouse and Data Mining
Introduction
A New Transformer Condition Assessment System
Data Collection Subsystem
Condition Analysis Subsystem
Applications
Conclusions
References
Shannon Wavelet Analysis
Introduction
Shannon Wavelets
Reconstruction of a Function by Shannon Wavelets
Wavelet Representation of Operators
Wavelet Analysis of Bifurcation in a Competition Model
Introduction
Short Haar Wavelet Transform
System of Competition
Critical Analysis
Evolution of a Spherical Universe in a Short Range Collapse/Generation Interval
Introduction
Evolution Equations
Exact Solutions of Evolution Equations in Three Different Cases
Study of the Behaviour of the Universe in Proximity of the Collapse/Generation Times by an Expansion in Fractional Power Series
Initial Principal Curvature 1 Positive
Initial Principal Curvature 1 Negative
On the Differentiable Structure of Meyer Wavelets
Introduction
Meyer's Wavelets
Some Properties of the Characteristic Function
First Order Meyer Wavelet
Towards Describing Multi-fractality of Traffic Using Local Hurst Function
Introduction
Multi-fractality of Real Traffic
Discussions
Conclusions
References
A Further Characterization on the Sampling Theorem for Wavelet Subspaces
Introduction and Main Results
Proof of Main Results
Characterization on Irregular Tight Wavelet Frames with Matrix Dilations
Characterization on Irregular Tight Wavelet Frames with Matrix-Dilations
Proof of Main Results
Remarks
Feature Extraction of Seal Imprint Based on the Double-Density Dual-Tree DWT
Introduction
The Double-Density Dual-Tree DWT Theory
The Dual-Tree CWT
The Double-Density Dual-Tree DWT
The Projections of High-Frequency Subbands
Experiment Result
Conclusion
References
Vanishing Waves on Semi-closed Space Intervals and Applications in Mathematical Physics
Introduction
Test-Functions for Semi-closed Space Intervals
Aspects Connected with Spherical Waves
Applications at Relativistic Transformation of Waves
Conclusions
Modelling Short Range Alternating Transitions by Alternating Practical Test Functions
Introduction
Connections with Test Functions
Conclusions
Different Structural Patterns Created by Short Range Variations of Internal Parameters
Introduction
Equations Able to Generate Periodical Patterns
Periodical Patterns of Spatial Structures Described by Practical Test-Functions
Connection with the Ergodic Hypothesis
Conclusions
Dynamic Error of Heat Measurement in Transient
Introduction
Principle of Dynamic Heat Meter
Dynamic Heat Measurement Error of Heat Meter in Transient
Errors in Same Initial Flow Rate with Different Step Change
Errors in Same Step Change with Different Initial Flow Rate $G_0$
Conclusion
References
Truncation Error Estimate on Random Signals by Local Average
Introduction and the Main Result
Proof of the Main Result
A Numerical Solution Based on the Quasi-wavelet Analysis
Introduction
Quasi-wavelet Solutions of Approximate Equations for Waves in Shallow Water
Spatial Coordinate Discrete to the Long Waves in Shallow Water
Numerical Discrete Forms of Quasi-wavelet Spatial Coordinate Discrete to the Long Waves in Shallow Water
Temporal Derivatives Discretization
Overall Solutions Scheme
Comparison Computations
Conclusion
References
Plant Simulation Based on Fusion of L-System and IFS
Introduction
Characteristics of L-System and IFS
Plant Simulation Method Based on Fusion of L-System and IFS
Improved L-System Implementation Algorithm
Results
Conclusion
References
A System Behavior Analysis Technique with Visualization of a Customer’s Domain
Introduction
Process of the System Behavior Analysis Technique
The Process of the Scenario-Based Visual Analysis
Artifacts of the System Behavior Analysis Technique
The Procedure of the System Behavior Analysis Technique
Application
Use Case Analysis
System Behavior Analysis
Concluding Remarks
Research on Dynamic Updating of Grid Service
Introduction
Architecture Supporting Grid Service Updating
System Architecture
Analysis of Proxy Service Mechanism
Version Management
Subscribe/Publish Model in Updating Procedure
Scheduling of Service Management
States Management
Updating Transactions of Grid-Based System
Prototype and Analysis
Summary and Future Work
References
Software Product Line Oriented Feature Map
Introduction
Feature Map
Deficiency of Feature Model
Definition of Feature Map
Meta-model of Feature Map
Case Study
Conclusion
References
Design and Development of Software Configuration Management Tool to Support Process Performance Monitoring and Analysis
Introduction
Current Configuration Management Tools
Development of Configuration Management Process Tool
Change Control Process
User Profiles and Transaction Permissions
Process Metric
Work Scenarios of CMPT
Conclusion and Future Work
References
Data Dependency Based Recovery Approaches in Survival Database Systems
Introduction
Related Work
Recovery Approaches in Survival Database Systems
Database and Transaction Theoretical Model
Transaction Logging Method
Data Dependency Based Recovery Approach Without Considering Blind Writes
Data Dependency Based Recovery Approach with Blind Writes
Performance Analyses
Conclusion
References
Usage-Centered Interface Design for Quality Improvement
Introduction
Related Works
Interface Design Model Based on Classification
Business Event Object
Task Object
Transaction Object
Form Object
Evaluation of Proposed Model
Features of Proposed Model
Criteria for VC
Conclusion
References
Description Logic Representation for Requirement Specification
Introduction and Related Works
Requirement Ontology for Requirement Specification
Description Logics
Representation of Requirement Specification and Reasoning in SHIQ
Queries in Description Logics
Conclusion
References
Ontologies and Software Engineering
Introduction
The Proposed Methodology
T-Model: Text Description Model
O-Model: Ontological Model
OL-Model: Ontologies Library Model
I-Model: Integrated Model
C-Model: Class Model
Conclusions
References
Epistemological and Ontological Representation in Software Engineering
Introduction
Empirical Research in Software Engineering
Epistemology in Empirical Software Engineering Research
Epistemology in Software Engineering
Problems with Empirical Software Engineering Research
Some Epistemological Results of the Empirical Research Approach to SE
Ontologies in Software Engineering
Conclusions
Exploiting Morpho-syntactic Features for Verb Sense Distinction in KorLex
Introduction
Related Work
Searching for Syntactic and Semantic Verb Sense Interface
Case Alternation in Korean
Verb Sense Distinction for KorLex Verb
Verb Sense Linking
Sense Distinction: Transitive/Passive Form
Sense Distinction for Korean Middle Verbs
Building KorLex Verb Hierarchy
Passive Verb Classification and Its Place in KorLex
Middle Verb Sense Distinction and Its Place in KorLex
Conclusion
References
Chinese Ancient-Modern Sentence Alignment
Introduction
Some Characteristics of Ancient-Modern Chinese Bi-texts
The Approach to Sentence Alignment
Length and Mode
Hanzi Information
Experiments and Evaluations
Conclusions
References
A Language Modeling Approach to Sentiment Analysis
Introduction
Language Modeling Approach to Sentiment Classification
Using Kullback-Leibler Divergence for Sentiment Classification
Estimation for Model Parameters
Document Set and Experiments
Conclusion
References
Processing the Mixed Properties of Light Verb Constructions
Issues
A Typed Feature Structure Grammar: KPSG
Mixed Properties Within a Multiple Inheritance System
Argument Composition and the Syntax of the LVC
Common Noun Usages
An Implementation and Its Results
Concept-Based Question Analysis for an Efficient Document Ranking
Introduction
Concept-Based Question Analysis
Document Ranking
Experimental Results
Conclusion and Future Work
References
Learning Classifier System Approach to Natural Language Grammar Induction
Introduction
Grammar-Based Classifier System
The Experiments
References
Text Retrieval Oriented Auto-construction of Conceptual Relationship
Introduction
Our Contributions
Generating Patterns
Pattern Confidence
Experiments
Conclusion
References
Filtering Methods for Feature Selection in Web-Document Clustering
Introduction
Preliminaries
Filtering Methods for Feature Selection
Experimental Evaluations
Conclusion
References
A Korean Part-of-Speech Tagging System Using Resolution Rules for Individual Ambiguous Word
Introduction
Building Information for Disambiguation
Common-Rules
Resolution Rules for Individual Ambiguous Word
Statistical Information
Experiment
Conclusion
References
An Interactive User Interface for Text Display
Introduction
Web Design on Text Display
Proposed Interface
Author Index

Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Moshe Y. Vardi, Rice University, Houston, TX, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

4488

Yong Shi, Geert Dick van Albada, Jack Dongarra, Peter M.A. Sloot (Eds.)

Computational Science – ICCS 2007
7th International Conference
Beijing, China, May 27-30, 2007
Proceedings, Part II


Volume Editors

Yong Shi
Graduate University of the Academy of Sciences
Beijing 100080, China
E-mail: [email protected]

Geert Dick van Albada
Peter M.A. Sloot
University of Amsterdam, Section Computational Science
1098 SJ Amsterdam, The Netherlands
E-mail: {dick, sloot}@science.uva.nl

Jack Dongarra
University of Tennessee, Computer Science Department
Knoxville, TN 37996-3450, USA
E-mail: [email protected]

Library of Congress Control Number: 2007927049
CR Subject Classification (1998): F, D, G, H, I.1, I.3, I.6, J, K.3, C.2-3
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues
ISSN 0302-9743
ISBN-10 3-540-72585-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-72585-5 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springer.com
© Springer-Verlag Berlin Heidelberg 2007
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12065738 06/3180 543210

Preface

The Seventh International Conference on Computational Science (ICCS 2007) was held in Beijing, China, May 27-30, 2007. This was the continuation of previous conferences in the series: ICCS 2006 in Reading, UK; ICCS 2005 in Atlanta, Georgia, USA; ICCS 2004 in Krakow, Poland; ICCS 2003 held simultaneously at two locations, in Melbourne, Australia and St. Petersburg, Russia; ICCS 2002 in Amsterdam, The Netherlands; and ICCS 2001 in San Francisco, California, USA. Since the first conference in San Francisco, the ICCS series has become a major platform to promote the development of Computational Science.

The theme of ICCS 2007 was “Advancing Science and Society through Computation.” It aimed to bring together researchers and scientists from mathematics and computer science as basic computing disciplines, researchers from various application areas who are pioneering the advanced application of computational methods to sciences such as physics, chemistry, life sciences, and engineering, arts and humanitarian fields, along with software developers and vendors, to discuss problems and solutions in the area, to identify new issues, and to shape future directions for research, as well as to help industrial users apply various advanced computational techniques.

During the opening of ICCS 2007, Siwei Cheng (Vice-Chairman of the Standing Committee of the National People’s Congress of the People’s Republic of China and the Dean of the School of Management of the Graduate University of the Chinese Academy of Sciences) presented the welcome speech on behalf of the Local Organizing Committee, after which Hector Ruiz (President and CEO, AMD) made remarks on behalf of international computing industries in China.

Seven keynote lectures were delivered by Vassil Alexandrov (Advanced Computing and Emerging Technologies, University of Reading, UK) - Efficient Scalable Algorithms for Large-Scale Computations; Hans Petter Langtangen (Simula Research Laboratory, Lysaker, Norway) - Computational Modelling of Huge Tsunamis from Asteroid Impacts; Jiawei Han (Department of Computer Science, University of Illinois at Urbana-Champaign, USA) - Research Frontiers in Advanced Data Mining Technologies and Applications; Ru-qian Lu (Institute of Mathematics, Chinese Academy of Sciences) - Knowledge Engineering and Knowledge Ware; Alessandro Vespignani (School of Informatics, Indiana University, USA) - Computational Epidemiology and Emergent Disease Forecast; David Keyes (Department of Applied Physics and Applied Mathematics, Columbia University) - Scalable Solver Infrastructure for Computational Science and Engineering; and Yves Robert (École Normale Supérieure de Lyon, France) - Think Before Coding: Static Strategies (and Dynamic Execution) for Clusters and Grids. We would like to express our thanks to all of the invited and keynote speakers for their inspiring talks.

In addition to the plenary sessions, the conference included 14 parallel oral sessions and 4 poster sessions. This year, we received more than 2,400 submissions for all tracks combined, out of which 716 were accepted. This includes 529 oral papers, 97 short papers, and 89 poster papers, spread over 35 workshops and a main track. For the main track we had 91 papers (80 oral papers and 11 short papers) in the proceedings, out of 360 submissions. We had some 930 people doing reviews for the conference, with 118 for the main track. Almost all papers received three reviews. The accepted papers are from more than 43 different countries and 48 different Internet top-level domains. The papers cover a large volume of topics in computational science and related areas, from multiscale physics to wireless networks, and from graph theory to tools for program development.

We would like to thank all workshop organizers and the Program Committee for the excellent work in maintaining the conference’s standing for high-quality papers. We would like to express our gratitude to staff and graduates of the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy and the Institute of Policy and Management for their hard work in support of ICCS 2007. We would like to thank the Local Organizing Committee and Local Arrangements Committee for their persistent and enthusiastic work towards the success of ICCS 2007. We owe special thanks to our sponsors, AMD, Springer, the University of Nebraska at Omaha, USA, and the Graduate University of the Chinese Academy of Sciences, for their generous support.

ICCS 2007 was organized by the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, with support from the Section Computational Science at the Universiteit van Amsterdam and Innovative Computing Laboratory at the University of Tennessee, in cooperation with the Society for Industrial and Applied Mathematics (SIAM), the International Association for Mathematics and Computers in Simulation (IMACS), the Chinese Society for Management Modernization (CSMM), and the Chinese Society of Optimization, Overall Planning and Economical Mathematics (CSOOPEM).

May 2007

Yong Shi

Organization

ICCS 2007 was organized by the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, with support from the Section Computational Science at the Universiteit van Amsterdam and Innovative Computing Laboratory at the University of Tennessee, in cooperation with the Society for Industrial and Applied Mathematics (SIAM), the International Association for Mathematics and Computers in Simulation (IMACS), and the Chinese Society for Management Modernization (CSMM).

Conference Chairs
Conference Chair - Yong Shi (Chinese Academy of Sciences, China / University of Nebraska at Omaha, USA)
Program Chair - Dick van Albada (Universiteit van Amsterdam, The Netherlands)
ICCS Series Overall Scientific Co-chair - Jack Dongarra (University of Tennessee, USA)
ICCS Series Overall Scientific Chair - Peter M.A. Sloot (Universiteit van Amsterdam, The Netherlands)

Local Organizing Committee
Weimin Zheng (Tsinghua University, Beijing, China) – Chair
Hesham Ali (University of Nebraska at Omaha, USA)
Chongfu Huang (Beijing Normal University, Beijing, China)
Masato Koda (University of Tsukuba, Japan)
Heeseok Lee (Korea Advanced Institute of Science and Technology, Korea)
Zengliang Liu (Beijing University of Science and Technology, Beijing, China)
Jen Tang (Purdue University, USA)
Shouyang Wang (Academy of Mathematics and System Science, Chinese Academy of Sciences, Beijing, China)
Weixuan Xu (Institute of Policy and Management, Chinese Academy of Sciences, Beijing, China)
Yong Xue (Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing, China)
Ning Zhong (Maebashi Institute of Technology, USA)
Hai Zhuge (Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China)


Local Arrangements Committee
Weixuan Xu, Chair
Yong Shi, Co-chair of events
Benfu Lu, Co-chair of publicity
Hongjin Yang, Secretary
Jianping Li, Member
Ying Liu, Member
Jing He, Member
Siliang Chen, Member
Guanxiong Jiang, Member
Nan Xiao, Member
Zujin Deng, Member

Sponsoring Institutions
AMD
Springer
World Scientific Publishing
University of Nebraska at Omaha, USA
Graduate University of Chinese Academy of Sciences
Institute of Policy and Management, Chinese Academy of Sciences
Universiteit van Amsterdam

Program Committee J.H. Abawajy, Deakin University, Australia D. Abramson, Monash University, Australia V. Alexandrov, University of Reading, UK I. Altintas, San Diego Supercomputer Center, UCSD M. Antolovich, Charles Sturt University, Australia E. Araujo, Universidade Federal de Campina Grande, Brazil M.A. Baker, University of Reading, UK B. Balis, Krakow University of Science and Technology, Poland A. Benoit, LIP, ENS Lyon, France I. Bethke, University of Amsterdam, The Netherlands J.A.R. Blais, University of Calgary, Canada I. Brandic, University of Vienna, Austria J. Broeckhove, Universiteit Antwerpen, Belgium M. Bubak, AGH University of Science and Technology, Poland K. Bubendorfer, Victoria University of Wellington, Australia B. Cantalupo, DATAMAT S.P.A, Italy J. Chen Swinburne, University of Technology, Australia O. Corcho, University of Manchester, UK J.C. Cunha, Univ. Nova de Lisboa, Portugal

Organization

S. Date, Osaka University, Japan F. Desprez, INRIA, France T. Dhaene, University of Antwerp, Belgium I.T. Dimov, ACET, The University of Reading, UK J. Dongarra, University of Tennessee, USA F. Donno, CERN, Switzerland C. Douglas, University of Kentucky, USA G. Fox, Indiana University, USA W. Funika, Krakow University of Science and Technology, Poland H.J. Gardner, Australian National University, Australia G. Geethakumari, University of Hyderabad, India Y. Gorbachev, St. Petersburg State Polytechnical University, Russia A.M. Goscinski, Deakin University, Australia M. Govindaraju, Binghamton University, USA G.A. Gravvanis, Democritus University of Thrace, Greece D.J. Groen, University of Amsterdam, The Netherlands T. Gubala, ACC CYFRONET AGH, Krakow, Poland M. Hardt, FZK, Germany T. Heinis, ETH Zurich, Switzerland L. Hluchy, Institute of Informatics, Slovak Academy of Sciences, Slovakia A.G. Hoekstra, University of Amsterdam, The Netherlands W. Hoffmann, University of Amsterdam, The Netherlands C. Huang, Beijing Normal University Beijing, China M. Humphrey, University of Virginia, USA A. Iglesias, University of Cantabria, Spain H. Jin, Huazhong University of Science and Technology, China D. Johnson, ACET Centre, University of Reading, UK B.D. Kandhai, University of Amsterdam, The Netherlands S. Kawata, Utsunomiya University, Japan W.A. Kelly, Queensland University of Technology, Australia J. Kitowski, Inst.Comp.Sci. AGH-UST, Cracow, Poland M. Koda, University of Tsukuba Japan D. Kranzlm¨ uller, GUP, Joh. Kepler University Linz, Austria B. Kryza, Academic Computer Centre CYFRONET-AGH, Cracow, Poland M. Kunze, Forschungszentrum Karlsruhe (FZK), Germany D. Kurzyniec, Emory University, Atlanta, USA A. Lagana, University of Perugia, Italy J. Lee, KISTI Supercomputing Center, Korea C. Lee, Aerospace Corp., USA L. Lefevre, INRIA, France A. Lewis, Griffith University, Australia H.W. Lim, Royal Holloway, University of London, UK A. Lin, NCMIR/UCSD, USA P. Lu, University of Alberta, Canada M. Malawski, Institute of Computer Science AGH, Poland

IX

X

Organization

M. Mascagni, Florida State University, USA V. Maxville, Curtin Business School, Australia A.S. McGough, London e-Science Centre, UK E.D. Moreno, UEA-BENq, Manaus, Brazil J.T. Moscicki, Cern, Switzerland S. Naqvi, CoreGRID Network of Excellence, France P.O.A. Navaux, Universidade Federal do Rio Grande do Sul, Brazil Z. Nemeth, Computer and Automation Research Institute, Hungarian Academy of Science, Hungary J. Ni, University of Iowa, USA G. Norman, Joint Institute for High Temperatures of RAS, Russia ´ Nuall´ B. O ain, University of Amsterdam, The Netherlands C.W. Oosterlee, Centrum voor Wiskunde en Informatica, CWI, The Netherlands S. Orlando, Universit` a Ca’ Foscari, Venice, Italy M. Paprzycki, IBS PAN and SWPS, Poland M. Parashar, Rutgers University, USA L.M. Patnaik, Indian Institute of Science, India C.P. Pautasso, ETH Z¨ urich, Switzerland R. Perrott, Queen’s University, Belfast, UK V. Prasanna, University of Southern California, USA T. Priol, IRISA, France M.R. Radecki, Krakow University of Science and Technology, Poland M. Ram, C-DAC Bangalore Centre, India A. Rendell, Australian National University, Australia P. Rhodes, University of Mississippi, USA M. Riedel, Research Centre Juelich, Germany D. Rodr´ıguez Garc´ıa, University of Alcal´ a, Spain K. Rycerz, Krakow University of Science and Technology, Poland R. Santinelli, CERN, Switzerland J. Schneider, Technische Universit¨ at Berlin, Germany B. Schulze, LNCC, Brazil J. Seo, The University of Manchester, UK Y. Shi, Chinese Academy of Sciences, Beijing, China D. Shires, U.S. Army Research Laboratory, USA A.E. Solomonides, University of the West of England, Bristol, UK V. Stankovski, University of Ljubljana, Slovenia H. Stockinger, Swiss Institute of Bioinformatics, Switzerland A. Streit, Forschungszentrum J¨ ulich, Germany H. Sun, Beihang University, China R. Tadeusiewicz, AGH University of Science and Technology, Poland J. Tang, Purdue University USA M. Taufer, University of Texas El Paso, USA C. Tedeschi, LIP-ENS Lyon, France A. Thandavan, ACET Center, University of Reading, UK A. Tirado-Ramos, University of Amsterdam, The Netherlands


P. Tvrdik, Czech Technical University Prague, Czech Republic G.D. van Albada, Universiteit van Amsterdam, The Netherlands F. van Lingen, California Institute of Technology, USA J. Vigo-Aguiar, University of Salamanca, Spain D.W. Walker, Cardiff University, UK C.L. Wang, University of Hong Kong, China A.L. Wendelborn, University of Adelaide, Australia Y. Xue, Chinese Academy of Sciences, China L.T. Yang, St. Francis Xavier University, Canada C.T. Yang, Tunghai University, Taichung, Taiwan J. Yu, The University of Melbourne, Australia Y. Zheng, Zhejiang University, China W. Zheng, Tsinghua University, Beijing, China L. Zhu, University of Florida, USA A. Zomaya, The University of Sydney, Australia E.V. Zudilova-Seinstra, University of Amsterdam, The Netherlands

Reviewers

J.H. Abawajy D. Abramson A. Abran P. Adriaans W. Ahn R. Akbani K. Akkaya R. Albert M. Aldinucci V.N. Alexandrov B. Alidaee I. Altintas K. Altmanninger S. Aluru S. Ambroszkiewicz L. Anido K. Anjyo C. Anthes M. Antolovich S. Antoniotti G. Antoniu H. Arabnia E. Araujo E. Ardeleanu J. Aroba J. Astalos

B. Autin M. Babik G. Bai E. Baker M.A. Baker S. Balfe B. Balis W. Banzhaf D. Bastola S. Battiato M. Baumgarten M. Baumgartner P. Beckaert A. Belloum O. Belmonte A. Belyaev A. Benoit G. Bergantz J. Bernsdorf J. Berthold I. Bethke I. Bhana R. Bhowmik M. Bickelhaupt J. Bin Shyan J. Birkett

J.A.R. Blais A. Bode B. Boghosian S. Bolboaca C. Bothorel A. Bouteiller I. Brandic S. Branford S.J. Branford R. Braungarten R. Briggs J. Broeckhove W. Bronsvoort A. Bruce C. Brugha Y. Bu K. Bubendorfer I. Budinska G. Buemi B. Bui H.J. Bungartz A. Byrski M. Cai Y. Cai Y.Q. Cai Z.Y. Cai


B. Cantalupo K. Cao M. Cao F. Capkovic A. Cepulkauskas K. Cetnarowicz Y. Chai P. Chan G.-L. Chang S.C. Chang W.A. Chaovalitwongse P.K. Chattaraj C.-K. Chen E. Chen G.Q. Chen G.X. Chen J. Chen J. Chen J.J. Chen K. Chen Q.S. Chen W. Chen Y. Chen Y.Y. Chen Z. Chen G. Cheng X.Z. Cheng S. Chiu K.E. Cho Y.-Y. Cho B. Choi J.K. Choi D. Choinski D.P. Chong B. Chopard M. Chover I. Chung M. Ciglan B. Cogan G. Cong J. Corander J.C. Corchado O. Corcho J. Cornil H. Cota de Freitas

E. Coutinho J.J. Cuadrado-Gallego Y.F. Cui J.C. Cunha V. Curcin A. Curioni R. da Rosa Righi S. Dalai M. Daneva S. Date P. Dazzi S. de Marchi V. Debelov E. Deelman J. Della Dora Y. Demazeau Y. Demchenko H. Deng X.T. Deng Y. Deng M. Mat Deris F. Desprez M. Dewar T. Dhaene Z.R. Di G. di Biasi A. Diaz Guilera P. Didier I.T. Dimov L. Ding G.D. Dobrowolski T. Dokken J.J. Dolado W. Dong Y.-L. Dong J. Dongarra F. Donno C. Douglas G.J. Garcke R.P. Mundani R. Drezewski D. Du B. Duan J.F. Dufourd H. Dun

C. Earley P. Edmond T. Eitrich A. El Rhalibi T. Ernst V. Ervin D. Estrin L. Eyraud-Dubois J. Falcou H. Fang Y. Fang X. Fei Y. Fei R. Feng M. Fernandez K. Fisher C. Fittschen G. Fox F. Freitas T. Friesz K. Fuerlinger M. Fujimoto T. Fujinami W. Funika T. Furumura A. Galvez L.J. Gao X.S. Gao J.E. Garcia H.J. Gardner M. Garre G. Garsva F. Gava G. Geethakumari M. Geimer J. Geiser J.-P. Gelas A. Gerbessiotis M. Gerndt S. Gimelshein S.G. Girdzijauskas S. Girtelschmid Z. Gj C. Glasner A. Goderis


D. Godoy J. Golebiowski S. Gopalakrishnan Y. Gorbachev A.M. Goscinski M. Govindaraju E. Grabska G.A. Gravvanis C.H. Grelck D.J. Groen L. Gross P. Gruer A. Grzech J.F. Gu Y. Guang Xue T. Gubala V. Guevara-Masis C.H. Guo X. Guo Z.Q. Guo L. Guohui C. Gupta I. Gutman A. Haffegee K. Han M. Hardt A. Hasson J. He J. He K. He T. He J. He M.R. Head P. Heinzlreiter H. Chojnacki J. Heo S. Hirokawa G. Hliniak L. Hluchy T.B. Ho A. Hoekstra W. Hoffmann A. Hoheisel J. Hong Z. Hong

D. Horvath F. Hu L. Hu X. Hu X.H. Hu Z. Hu K. Hua H.W. Huang K.-Y. Huang L. Huang L. Huang M.S. Huang S. Huang T. Huang W. Huang Y. Huang Z. Huang Z. Huang B. Huber E. Hubo J. Hulliger M. Hultell M. Humphrey P. Hurtado J. Huysmans T. Ida A. Iglesias K. Iqbal D. Ireland N. Ishizawa I. Lukovits R. Jamieson J.K. Jan P. Janderka M. Jankowski L. J¨ antschi S.J.K. Jensen N.J. Jeon T.H. Jeon T. Jeong H. Ji X. Ji D.Y. Jia C. Jiang H. Jiang


M.J. Jiang P. Jiang W. Jiang Y. Jiang H. Jin J. Jin L. Jingling G.-S. Jo D. Johnson J. Johnstone J.J. Jung K. Juszczyszyn J.A. Kaandorp M. Kabelac B. Kadlec R. Kakkar C. Kameyama B.D. Kandhai S. Kandl K. Kang S. Kato S. Kawata T. Kegl W.A. Kelly J. Kennedy G. Khan J.B. Kido C.H. Kim D.S. Kim D.W. Kim H. Kim J.G. Kim J.H. Kim M. Kim T.H. Kim T.W. Kim P. Kiprof R. Kirner M. Kisiel-Dorohinicki J. Kitowski C.R. Kleijn M. Kluge upfer A. Kn¨ I.S. Ko Y. Ko


R. Kobler B. Koblitz G.A. Kochenberger M. Koda T. Koeckerbauer M. Koehler I. Kolingerova V. Korkhov T. Korkmaz L. Kotulski G. Kou J. Kozlak M. Krafczyk D. Kranzlm¨ uller B. Kryza V.V. Krzhizhanovskaya M. Kunze D. Kurzyniec E. Kusmierek S. Kwang Y. Kwok F. Kyriakopoulos H. Labiod A. Lagana H. Lai S. Lai Z. Lan G. Le Mahec B.G. Lee C. Lee H.K. Lee J. Lee J. Lee J.H. Lee S. Lee S.Y. Lee V. Lee Y.H. Lee L. Lefevre L. Lei F. Lelj A. Lesar D. Lesthaeghe Z. Levnajic A. Lewis

A. Li D. Li D. Li E. Li J. Li J. Li J.P. Li M. Li P. Li X. Li X.M. Li X.S. Li Y. Li Y. Li J. Liang L. Liang W.K. Liao X.F. Liao G.G. Lim H.W. Lim S. Lim A. Lin I.C. Lin I-C. Lin Y. Lin Z. Lin P. Lingras C.Y. Liu D. Liu D.S. Liu E.L. Liu F. Liu G. Liu H.L. Liu J. Liu J.C. Liu R. Liu S.Y. Liu W.B. Liu X. Liu Y. Liu Y. Liu Y. Liu Y. Liu Y.J. Liu

Y.Z. Liu Z.J. Liu S.-C. Lo R. Loogen B. L´opez A. L´ opez Garc´ıa de Lomana F. Loulergue G. Lu J. Lu J.H. Lu M. Lu P. Lu S. Lu X. Lu Y.C. Lu C. Lursinsap L. Ma M. Ma T. Ma A. Macedo N. Maillard M. Malawski S. Maniccam S.S. Manna Z.M. Mao M. Mascagni E. Mathias R.C. Maurya V. Maxville A.S. McGough R. Mckay T.-G. MCKenzie K. Meenal R. Mehrotra M. Meneghin F. Meng M.F.J. Meng E. Merkevicius M. Metzger Z. Michalewicz J. Michopoulos J.-C. Mignot R. mikusauskas H.Y. Ming


G. Miranda Valladares M. Mirua G.P. Miscione C. Miyaji A. Miyoshi J. Monterde E.D. Moreno G. Morra J.T. Moscicki H. Moshkovich V.M. Moskaliova G. Mounie C. Mu A. Muraru H. Na K. Nakajima Y. Nakamori S. Naqvi S. Naqvi R. Narayanan A. Narjess A. Nasri P. Navaux P.O.A. Navaux M. Negoita Z. Nemeth L. Neumann N.T. Nguyen J. Ni Q. Ni K. Nie G. Nikishkov V. Nitica W. Nocon A. Noel G. Norman ´ Nuall´ B. O ain N. O’Boyle J.T. Oden Y. Ohsawa H. Okuda D.L. Olson C.W. Oosterlee V. Oravec S. Orlando

F.R. Ornellas A. Ortiz S. Ouyang T. Owens S. Oyama B. Ozisikyilmaz A. Padmanabhan Z. Pan Y. Papegay M. Paprzycki M. Parashar K. Park M. Park S. Park S.K. Pati M. Pauley C.P. Pautasso B. Payne T.C. Peachey S. Pelagatti F.L. Peng Q. Peng Y. Peng N. Petford A.D. Pimentel W.A.P. Pinheiro J. Pisharath G. Pitel D. Plemenos S. Pllana S. Ploux A. Podoleanu M. Polak D. Prabu B.B. Prahalada Rao V. Prasanna P. Praxmarer V.B. Priezzhev T. Priol T. Prokosch G. Pucciani D. Puja P. Puschner L. Qi D. Qin

H. Qin K. Qin R.X. Qin X. Qin G. Qiu X. Qiu J.Q. Quinqueton M.R. Radecki S. Radhakrishnan S. Radharkrishnan M. Ram S. Ramakrishnan P.R. Ramasami P. Ramsamy K.R. Rao N. Ratnakar T. Recio K. Regenauer-Lieb R. Rejas F.Y. Ren A. Rendell P. Rhodes J. Ribelles M. Riedel R. Rioboo Y. Robert G.J. Rodgers A.S. Rodionov D. Rodr´ıguez Garc´ıa C. Rodriguez Leon F. Rogier G. Rojek L.L. Rong H. Ronghuai H. Rosmanith F.-X. Roux R.K. Roy U. R¨ ude M. Ruiz T. Ruofeng K. Rycerz M. Ryoke F. Safaei T. Saito V. Sakalauskas


L. Santillo R. Santinelli K. Sarac H. Sarafian M. Sarfraz V.S. Savchenko M. Sbert R. Schaefer D. Schmid J. Schneider M. Schoeberl S.-B. Scholz B. Schulze S.R. Seelam B. Seetharamanjaneyalu J. Seo K.D. Seo Y. Seo O.A. Serra A. Sfarti H. Shao X.J. Shao F.T. Sheldon H.Z. Shen S.L. Shen Z.H. Sheng H. Shi Y. Shi S. Shin S.Y. Shin B. Shirazi D. Shires E. Shook Z.S. Shuai M.A. Sicilia M. Simeonidis K. Singh M. Siqueira W. Sit M. Skomorowski A. Skowron P.M.A. Sloot M. Smolka B.S. Sniezynski H.Z. Sojka

A.E. Solomonides C. Song L.J. Song S. Song W. Song J. Soto A. Sourin R. Srinivasan V. Srovnal V. Stankovski P. Sterian H. Stockinger D. Stokic A. Streit B. Strug P. Stuedi A. St¨ umpel S. Su V. Subramanian P. Suganthan D.A. Sun H. Sun S. Sun Y.H. Sun Z.G. Sun M. Suvakov H. Suzuki D. Szczerba L. Szecsi L. Szirmay-Kalos R. Tadeusiewicz B. Tadic T. Takahashi S. Takeda J. Tan H.J. Tang J. Tang S. Tang T. Tang X.J. Tang J. Tao M. Taufer S.F. Tayyari C. Tedeschi J.C. Teixeira

F. Terpstra C. Te-Yi A.Y. Teymorian D. Thalmann A. Thandavan L. Thompson S. Thurner F.Z. Tian Y. Tian Z. Tianshu A. Tirado-Ramos A. Tirumala P. Tjeerd W. Tong A.S. Tosun A. Tropsha C. Troyer K.C.K. Tsang A.C. Tsipis I. Tsutomu A. Turan P. Tvrdik U. Ufuktepe V. Uskov B. Vaidya E. Valakevicius I.A. Valuev S. Valverde G.D. van Albada R. van der Sman F. van Lingen A.J.C. Varandas C. Varotsos D. Vasyunin R. Veloso J. Vigo-Aguiar J. Vill` a i Freixa V. Vivacqua E. Vumar R. Walentkynski D.W. Walker B. Wang C.L. Wang D.F. Wang D.H. Wang


F. Wang F.L. Wang H. Wang H.G. Wang H.W. Wang J. Wang J. Wang J. Wang J. Wang J.H. Wang K. Wang L. Wang M. Wang M.Z. Wang Q. Wang Q.Q. Wang S.P. Wang T.K. Wang W. Wang W.D. Wang X. Wang X.J. Wang Y. Wang Y.Q. Wang Z. Wang Z.T. Wang A. Wei G.X. Wei Y.-M. Wei X. Weimin D. Weiskopf B. Wen A.L. Wendelborn I. Wenzel A. Wibisono A.P. Wierzbicki R. Wism¨ uller F. Wolf C. Wu C. Wu F. Wu G. Wu J.N. Wu X. Wu X.D. Wu

Y. Wu Z. Wu B. Wylie M. Xavier Py Y.M. Xi H. Xia H.X. Xia Z.R. Xiao C.F. Xie J. Xie Q.W. Xie H. Xing H.L. Xing J. Xing K. Xing L. Xiong M. Xiong S. Xiong Y.Q. Xiong C. Xu C.-H. Xu J. Xu M.W. Xu Y. Xu G. Xue Y. Xue Z. Xue A. Yacizi B. Yan N. Yan N. Yan W. Yan H. Yanami C.T. Yang F.P. Yang J.M. Yang K. Yang L.T. Yang L.T. Yang P. Yang X. Yang Z. Yang W. Yanwen S. Yarasi D.K.Y. Yau


P.-W. Yau M.J. Ye G. Yen R. Yi Z. Yi J.G. Yim L. Yin W. Yin Y. Ying S. Yoo T. Yoshino W. Youmei Y.K. Young-Kyu Han J. Yu J. Yu L. Yu Z. Yu Z. Yu W. Yu Lung X.Y. Yuan W. Yue Z.Q. Yue D. Yuen T. Yuizono J. Zambreno P. Zarzycki M.A. Zatevakhin S. Zeng A. Zhang C. Zhang D. Zhang D.L. Zhang D.Z. Zhang G. Zhang H. Zhang H.R. Zhang H.W. Zhang J. Zhang J.J. Zhang L.L. Zhang M. Zhang N. Zhang P. Zhang P.Z. Zhang Q. Zhang


S. Zhang W. Zhang W. Zhang Y.G. Zhang Y.X. Zhang Z. Zhang Z.W. Zhang C. Zhao H. Zhao H.K. Zhao H.P. Zhao J. Zhao M.H. Zhao W. Zhao

Z. Zhao L. Zhen B. Zheng G. Zheng W. Zheng Y. Zheng W. Zhenghong P. Zhigeng W. Zhihai Y. Zhixia A. Zhmakin C. Zhong X. Zhong K.J. Zhou

L.G. Zhou X.J. Zhou X.L. Zhou Y.T. Zhou H.H. Zhu H.L. Zhu L. Zhu X.Z. Zhu Z. Zhu M. Zhu. J. Zivkovic A. Zomaya E.V. Zudilova-Seinstra

Workshop Organizers Sixth International Workshop on Computer Graphics and Geometric Modelling A. Iglesias, University of Cantabria, Spain Fifth International Workshop on Computer Algebra Systems and Applications A. Iglesias, University of Cantabria, Spain, A. Galvez, University of Cantabria, Spain PAPP 2007 - Practical Aspects of High-Level Parallel Programming (4th International Workshop) A. Benoit, ENS Lyon, France F. Loulerge, LIFO, Orlans, France International Workshop on Collective Intelligence for Semantic and Knowledge Grid (CISKGrid 2007) N.T. Nguyen, Wroclaw University of Technology, Poland J.J. Jung, INRIA Rhˆ one-Alpes, France K. Juszczyszyn, Wroclaw University of Technology, Poland Simulation of Multiphysics Multiscale Systems, 4th International Workshop V.V. Krzhizhanovskaya, Section Computational Science, University of Amsterdam, The Netherlands A.G. Hoekstra, Section Computational Science, University of Amsterdam, The Netherlands


S. Sun, Clemson University, USA J. Geiser, Humboldt University of Berlin, Germany 2nd Workshop on Computational Chemistry and Its Applications (2nd CCA) P.R. Ramasami, University of Mauritius Efficient Data Management for HPC Simulation Applications R.-P. Mundani, Technische Universit¨ at M¨ unchen, Germany J. Abawajy, Deakin University, Australia M. Mat Deris, Tun Hussein Onn College University of Technology, Malaysia Real Time Systems and Adaptive Applications (RTSAA-2007) J. Hong, Soongsil University, South Korea T. Kuo, National Taiwan University, Taiwan The International Workshop on Teaching Computational Science (WTCS 2007) L. Qi, Department of Information and Technology, Central China Normal University, China W. Yanwen, Department of Information and Technology, Central China Normal University, China W. Zhenghong, East China Normal University, School of Information Science and Technology, China GeoComputation Y. Xue, IRSA, China Risk Analysis C.F. Huang, Beijing Normal University, China Advanced Computational Approaches and IT Techniques in Bioinformatics M.A. Pauley, University of Nebraska at Omaha, USA H.A. Ali, University of Nebraska at Omaha, USA Workshop on Computational Finance and Business Intelligence Y. Shi, Chinese Acedemy of Scienes, China S.Y. Wang, Academy of Mathematical and System Sciences, Chinese Academy of Sciences, China X.T. Deng, Department of Computer Science, City University of Hong Kong, China


Collaborative and Cooperative Environments C. Anthes, Institute of Graphics and Parallel Processing, JKU, Austria V.N. Alexandrov, ACET Centre, The University of Reading, UK D. Kranzlm¨ uller, Institute of Graphics and Parallel Processing, JKU, Austria J. Volkert, Institute of Graphics and Parallel Processing, JKU, Austria Tools for Program Development and Analysis in Computational Science A. Kn¨ upfer, ZIH, TU Dresden, Germany A. Bode, TU Munich, Germany D. Kranzlm¨ uller, Institute of Graphics and Parallel Processing, JKU, Austria J. Tao, CAPP, University of Karlsruhe, Germany R. Wissm¨ uller FB12, BSVS, University of Siegen, Germany J. Volkert, Institute of Graphics and Parallel Processing, JKU, Austria Workshop on Mining Text, Semi-structured, Web or Multimedia Data (WMTSWMD 2007) G. Kou, Thomson Corporation, R&D, USA Y. Peng, Omnium Worldwide, Inc., USA J.P. Li, Institute of Policy and Management, Chinese Academy of Sciences, China 2007 International Workshop on Graph Theory, Algorithms and Its Applications in Computer Science (IWGA 2007) M. Li, Dalian University of Technology, China 2nd International Workshop on Workflow Systems in e-Science (WSES 2007) Z. Zhao, University of Amsterdam, The Netherlands A. Belloum, University of Amsterdam, The Netherlands 2nd International Workshop on Internet Computing in Science and Engineering (ICSE 2007) J. Ni, The University of Iowa, USA Workshop on Evolutionary Algorithms and Evolvable Systems (EAES 2007) B. Zheng, College of Computer Science, South-Central University for Nationalities, Wuhan, China Y. Li, State Key Lab. of Software Engineering, Wuhan University, Wuhan, China J. Wang, College of Computer Science, South-Central University for Nationalities, Wuhan, China L. Ding, State Key Lab. of Software Engineering, Wuhan University, Wuhan, China


Wireless and Mobile Systems 2007 (WMS 2007) H. Choo, Sungkyunkwan University, South Korea WAFTS: WAvelets, FracTals, Short-Range Phenomena — Computational Aspects and Applications C. Cattani, University of Salerno, Italy C. Toma, Polythecnica, Bucharest, Romania Dynamic Data-Driven Application Systems - DDDAS 2007 F. Darema, National Science Foundation, USA The Seventh International Workshop on Meta-synthesis and Complex Systems (MCS 2007) X.J. Tang, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China J.F. Gu, Institute of Systems Science, Chinese Academy of Sciences, China Y. Nakamori, Japan Advanced Institute of Science and Technology, Japan H.C. Wang, Shanghai Jiaotong University, China The 1st International Workshop on Computational Methods in Energy Economics L. Yu, City University of Hong Kong, China J. Li, Chinese Academy of Sciences, China D. Qin, Guangdong Provincial Development and Reform Commission, China High-Performance Data Mining Y. Liu, Data Technology and Knowledge Economy Research Center, Chinese Academy of Sciences, China A. Choudhary, Electrical and Computer Engineering Department, Northwestern University, USA S. Chiu, Department of Computer Science, College of Engineering, Idaho State University, USA Computational Linguistics in Human–Computer Interaction H. Ji, Sungkyunkwan University, South Korea Y. Seo, Chungbuk National University, South Korea H. Choo, Sungkyunkwan University, South Korea Intelligent Agents in Computing Systems K. Cetnarowicz, Department of Computer Science, AGH University of Science and Technology, Poland R. Schaefer, Department of Computer Science, AGH University of Science and Technology, Poland


Networks: Theory and Applications B. Tadic, Jozef Stefan Institute, Ljubljana, Slovenia S. Thurner, COSY, Medical University Vienna, Austria Workshop on Computational Science in Software Engineering D. Rodrguez, University of Alcala, Spain J.J. Cuadrado-Gallego, University of Alcala, Spain International Workshop on Advances in Computational Geomechanics and Geophysics (IACGG 2007) H.L. Xing, The University of Queensland and ACcESS Major National Research Facility, Australia J.H. Wang, Shanghai Jiao Tong University, China 2nd International Workshop on Evolution Toward Next-Generation Internet (ENGI) Y. Cui, Tsinghua University, China Parallel Monte Carlo Algorithms for Diverse Applications in a Distributed Setting V.N. Alexandrov, ACET Centre, The University of Reading, UK The 2007 Workshop on Scientific Computing in Electronics Engineering (WSCEE 2007) Y. Li, National Chiao Tung University, Taiwan High-Performance Networked Media and Services 2007 (HiNMS 2007) I.S. Ko, Dongguk University, South Korea Y.J. Na, Honam University, South Korea

Table of Contents – Part II

Resolving Occlusion Method of Virtual Object in Simulation Using Snake and Picking Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . JeongHee Cha, GyeYoung Kim, and HyungIl Choi

1

Graphics Hardware-Based Level-Set Method for Interactive Segmentation and Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Helen Hong and Seongjin Park

9

Parameterization of Quadrilateral Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . Li Liu, CaiMing Zhang, and Frank Cheng

17

Pose Insensitive 3D Retrieval by Poisson Shape Histogram . . . . . . . . . . . . Pan Xiang, Chen Qi Hua, Fang Xin Gang, and Zheng Bo Chuan

25

Point-Sampled Surface Simulation Based on Mass-Spring System . . . . . . . Zhixun Su, Xiaojie Zhou, Xiuping Liu, Fengshan Liu, and Xiquan Shi

33

Sweeping Surface Generated by a Class of Generalized Quasi-cubic Interpolation Spline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Benyue Su and Jieqing Tan

41

An Artificial Immune System Approach for B-Spline Surface Approximation Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ¨ ˙sler Erkan Ulker and Veysi I¸

49

Implicit Surface Reconstruction from Scattered Point Data with Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jun Yang, Zhengning Wang, Changqian Zhu, and Qiang Peng

57

The Shannon Entropy-Based Node Placement for Enrichment and Simplification of Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vladimir Savchenko, Maria Savchenko, Olga Egorova, and Ichiro Hagiwara

65

Parameterization of 3D Surface Patches by Straightest Distances . . . . . . . Sungyeol Lee and Haeyoung Lee

73

Facial Expression Recognition Based on Emotion Dimensions on Manifold Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Young-suk Shin

81

AI Framework for Decision Modeling in Behavioral Animation of Virtual Avatars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Iglesias and F. Luengo

89


Studies on Shape Feature Combination and Efficient Categorization of 3D Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tianyang Lv, Guobao Liu, Jiming Pang, and Zhengxuan Wang

97

A Generalised-Mutual-Information-Based Oracle for Hierarchical Radiosity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jaume Rigau, Miquel Feixas, and Mateu Sbert

105

Rendering Technique for Colored Paper Mosaic . . . . . . . . . . . . . . . . . . . . . . Youngsup Park, Sanghyun Seo, YongJae Gi, Hanna Song, and Kyunghyun Yoon

114

Real-Time Simulation of Surface Gravity Ocean Waves Based on the TMA Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Namkyung Lee, Nakhoon Baek, and Kwan Woo Ryu

122

Determining Knots with Quadratic Polynomial Precision . . . . . . . . . . . . . . Zhang Caiming, Ji Xiuhua, and Liu Hui

130

Interactive Cartoon Rendering and Sketching of Clouds and Smoke . . . . . ´ Eduardo J. Alvarez, Celso Campos, Silvana G. Meire, Ricardo Quir´ os, Joaquin Huerta, and Michael Gould

138

Spherical Binary Images Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liu Wei and He Yuanjun

146

Dynamic Data Path Prediction in Network Virtual Environment . . . . . . . Sun-Hee Song, Seung-Moon Jeong, Gi-Taek Hur, and Sang-Dong Ra

150

Modeling Inlay/Onlay Prostheses with Mesh Deformation Techniques . . . Kwan-Hee Yoo, Jong-Sung Ha, and Jae-Soo Yoo

154

Automatic Generation of Virtual Computer Rooms on the Internet Using X3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Aybars U˘gur and Tahir Emre Kalaycı

158

Stained Glass Rendering with Smooth Tile Boundary . . . . . . . . . . . . . . . . . SangHyun Seo, HoChang Lee, HyunChul Nah, and KyungHyun Yoon

162

Guaranteed Adaptive Antialiasing Using Interval Arithmetic . . . . . . . . . . Jorge Fl´ orez, Mateu Sbert, Miguel A. Sainz, and Josep Veh´ı

166

Restricted Non-cooperative Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seth J. Chandler

170

A New Application of CAS to LATEX Plottings . . . . . . . . . . . . . . . . . . . . . . . Masayoshi Sekiguchi, Masataka Kaneko, Yuuki Tadokoro, Satoshi Yamashita, and Setsuo Takato

178


JMathNorm: A Database Normalization Tool Using Mathematica . . . . . . Ali Yazici and Ziya Karakaya

186

Symbolic Manipulation of Bspline Basis Functions with Mathematica . . . A. Iglesias, R. Ipanaqu´e, and R.T. Urbina

194

Rotating Capacitor and a Transient Electric Network . . . . . . . . . . . . . . . . . Haiduke Sarafian and Nenette Sarafian

203

Numerical-Symbolic Matlab Program for the Analysis of Three-Dimensional Chaotic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Akemi G´ alvez

211

Safety of Recreational Water Slides: Numerical Estimation of the Trajectory, Velocities and Accelerations of Motion of the Users . . . . . . . . Piotr Szczepaniak and Ryszard Walenty´ nski

219

Computing Locus Equations for Standard Dynamic Geometry Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Francisco Botana, Miguel A. Ab´ anades, and Jes´ us Escribano

227

Symbolic Computation of Petri Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Andres Iglesias and Sinan Kapcak

235

Dynaput: Dynamic Input Manipulations for 2D Structures of Mathematical Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Deguchi Hiroaki

243

On the Virtues of Generic Programming for Symbolic Computation . . . . ´ Xin Li, Marc Moreno Maza, and Eric Schost

251

Semi-analytical Approach for Analyzing Vibro-Impact Systems . . . . . . . . ˇ Algimantas Cepulkauskas, Regina Kulvietien˙e, Genadijus Kulvietis, and Jurate Mikucioniene

259

Formal Verification of Analog and Mixed Signal Designs in Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mohamed H. Zaki, Ghiath Al-Sammane, and Sofi`ene Tahar Efficient Computations of Irredundant Triangular Decompositions with the RegularChains Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Changbo Chen, Fran¸cois Lemaire, Marc Moreno Maza, Wei Pan, and Yuzhen Xie Characterisation of the Surfactant Shell Stabilising Calcium Carbonate Dispersions in Overbased Detergent Additives: Molecular Modelling and Spin-Probe-ESR Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Francesco Frigerio and Luciano Montanari

263

268

272


Hydrogen Adsorption and Penetration of Cx (x=58-62) Fullerenes with Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xin Yue, Jijun Zhao, and Jieshan Qiu

280

Ab Initio and DFT Investigations of the Mechanistic Pathway of Singlet Bromocarbenes Insertion into C-H Bonds of Methane and Ethane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Ramalingam, K. Ramasami, P. Venuvanalingam, and J. Swaminathan

288

Theoretical Gas Phase Study of the Gauche and Trans Conformers of 1-Bromo-2-Chloroethane and Solvent Effects . . . . . . . . . . . . . . . . . . . . . . . . Ponnadurai Ramasami

296

Dynamics Simulation of Conducting Polymer Interchain Interaction Effects on Polaron Transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jos´e Rildo de Oliveira Queiroz and Geraldo Magela e Silva

304

Cerium (III) Complexes Modeling with Sparkle/PM3 . . . . . . . . . . . . . . . . . Alfredo Mayall Simas, Ricardo Oliveira Freire, and Gerd Bruno Rocha

312

The Design of Blue Emitting Materials Based on Spirosilabifluorene Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Miao Sun, Ben Niu, and Jingping Zhang

319

Regulative Effect of Water Molecules on the Switches of Guanine-Cytosine (GC) Watson-Crick Pair . . . . . . . . . . . . . . . . . . . . . . . . . . Hongqi Ai, Xian Peng, Yun Li, and Chong Zhang

327

Energy Partitioning Analysis of the Chemical Bonds in mer-Mq3 (M = AlIII , GaIII , InIII , TlIII ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ruihai Cui and Jingping Zhang

331

Ab Initio Quantum Chemical Studies of Six-Center Bond Exchange Reactions Among Halogen and Halogen Halide Molecules . . . . . . . . . . . . . I. Noorbatcha, B. Arifin, and S.M. Zain

335

Comparative Analysis of the Interaction Networks of HIV-1 and Human Proteins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kyungsook Han and Byungkyu Park

339

Protein Classification from Protein-Domain and Gene-Ontology Annotation Information Using Formal Concept Analysis . . . . . . . . . . . . . . Mi-Ryung Han, Hee-Joon Chung, Jihun Kim, Dong-Young Noh, and Ju Han Kim A Supervised Classifier Based on Artificial Immune System . . . . . . . . . . . . Lingxi Peng, Yinqiao Peng, Xiaojie Liu, Caiming Liu, Jinquan Zeng, Feixian Sun, and Zhengtian Lu

347

355


Ab-origin: An Improved Tool of Heavy Chain Rearrangement Analysis for Human Immunoglobulin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiaojing Wang, Wu Wei, SiYuan Zheng, Z.W. Cao, and Yixue Li Analytically Tuned Simulated Annealing Applied to the Protein Folding Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juan Frausto-Solis, E.F. Rom´ an, David Romero, Xavier Soberon, and Ernesto Li˜ n´ an-Garc´ıa Training the Hidden Vector State Model from Un-annotated Corpus . . . . Deyu Zhou, Yulan He, and Chee Keong Kwoh Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . John Sanford, John Baumgardner, Wes Brewer, Paul Gibson, and Walter ReMine


363

370

378

386

An Object Model Based Repository for Biological Pathways Using XML Database Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Keyuan Jiang

393

Protein Folding Simulation with New Move Set in 3D Lattice Model . . . . X.-M. Li

397

A Dynamic Committee Scheme on Multiple-Criteria Linear Programming Classification Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Meihong Zhu, Yong Shi, Aihua Li, and Jing He

401

Kimberlites Identification by Classification Methods . . . . . . . . . . . . . . . . . . Yaohui Chai, Aihua Li, Yong Shi, Jing He, and Keliang Zhang

409

A Fast Method for Pricing Early-Exercise Options with the FFT . . . . . . . R. Lord, F. Fang, F. Bervoets, and C.W. Oosterlee

415

Neural-Network-Based Fuzzy Group Forecasting with Application to Foreign Exchange Rates Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lean Yu, Kin Keung Lai, and Shouyang Wang

423

Credit Risk Evaluation Using Support Vector Machine with Mixture of Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liwei Wei, Jianping Li, and Zhenyu Chen

431

Neuro-discriminate Model for the Forecasting of Changes of Companies Financial Standings on the Basis of Self-organizing Maps . . . . . . . . . . . . . . Egidijus Merkeviˇcius, Gintautas Garˇsva, and Rimvydas Simutis

439

A New Computing Method for Greeks Using Stochastic Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Masato Koda

447


Application of Neural Networks for Foreign Exchange Rates Forecasting with Noise Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Huang, Kin Keung Lai, and Shouyang Wang

455

An Experiment with Fuzzy Sets in Data Mining . . . . . . . . . . . . . . . . . . . . . David L. Olson, Helen Moshkovich, and Alexander Mechitov

462

An Application of Component-Wise Iterative Optimization to Feed-Forward Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yachen Lin

470

ERM-POT Method for Quantifying Operational Risk for Chinese Commercial Banks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fanjun Meng, Jianping Li, and Lijun Gao

478

Building Behavior Scoring Model Using Genetic Algorithm and Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Defu Zhang, Qingshan Chen, and Lijun Wei

482

An Intelligent CRM System for Identifying High-Risk Customers: An Ensemble Data Mining Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kin Keung Lai, Lean Yu, Shouyang Wang, and Wei Huang

486

The Characteristic Analysis of Web User Clusters Based on Frequent Browsing Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhiwang Zhang and Yong Shi

490

A Two-Phase Model Based on SVM and Conjoint Analysis for Credit Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kin Keung Lai, Ligang Zhou, and Lean Yu

494

A New Multi-Criteria Quadratic-Programming Linear Classification Model for VIP E-Mail Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Peng Zhang, Juliang Zhang, and Yong Shi

499

Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nargess Memarsadeghi and David M. Mount

503

Development of an Efficient Conversion System for GML Documents . . . Dong-Suk Hong, Hong-Koo Kang, Dong-Oh Kim, and Ki-Joon Han

511

Effective Spatial Characterization System Using Density-Based Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chan-Min Ahn, Jae-Hyun You, Ju-Hong Lee, and Deok-Hwan Kim

515

MTF Measurement Based on Interactive Live-Wire Edge Extraction . . . . Peng Liu, Dingsheng Liu, and Fang Huang

523


Research on Technologies of Spatial Configuration Information Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haibin Sun Modelbase System in Remote Sensing Information Analysis and Service Grid Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yong Xue, Lei Zheng, Ying Luo, Jianping Guo, Wei Wan, Wei Wei, and Ying Wang


531

538

Density Based Fuzzy Membership Functions in the Context of Geocomputation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Victor Lobo, Fernando Ba¸ca ˜o, and Miguel Loureiro

542

A New Method to Model Neighborhood Interaction in Cellular Automata-Based Urban Geosimulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yaolong Zhao and Yuji Murayama

550

Artificial Neural Networks Application to Calculate Parameter Values in the Magnetotelluric Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Andrzej Bielecki, Tomasz Danek, Janusz Jagodzi´ nski, and Marek Wojdyla Integrating Ajax into GIS Web Services for Performance Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seung-Jun Cha, Yun-Young Hwang, Yoon-Seop Chang, Kyung-Ok Kim, and Kyu-Chul Lee Aerosol Optical Thickness Retrieval over Land from MODIS Data on Remote Sensing Information Service Grid Node . . . . . . . . . . . . . . . . . . . . . . Jianping Guo, Yong Xue, Ying Wang, Yincui Hu, Jianqin Wang, Ying Luo, Shaobo Zhong, Wei Wan, Lei Zheng, and Guoyin Cai

558

562

569

Universal Execution of Parallel Processes: Penetrating NATs over the Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Insoon Jo, Hyuck Han, Heon Y. Yeom, and Ohkyoung Kwon

577

Parallelization of C# Programs Through Annotations . . . . . . . . . . . . . . . . Cristian Dittamo, Antonio Cisternino, and Marco Danelutto

585

Fine Grain Distributed Implementation of a Dataflow Language with Provable Performances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Thierry Gautier, Jean-Louis Roch, and Fr´ed´eric Wagner

593

Efficient Parallel Tree Reductions on Distributed Memory Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kazuhiko Kakehi, Kiminori Matsuzaki, and Kento Emoto

601

Efficient Implementation of Tree Accumulations on Distributed-Memory Parallel Computers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kiminori Matsuzaki

609


SymGrid-Par: Designing a Framework for Executing Computational Algebra Systems on Computational Grids . . . . . . . . . . . . . . . . . . . . . . . . . . . Abdallah Al Zain, Kevin Hammond, Phil Trinder, Steve Linton, Hans-Wolfgang Loidl, and Marco Costanti

617

Directed Network Representation of Discrete Dynamical Maps . . . . . . . . . Fragiskos Kyriakopoulos and Stefan Thurner

625

Dynamical Patterns in Scalefree Trees of Coupled 2D Chaotic Maps . . . . Zoran Levnaji´c and Bosiljka Tadi´c

633

Simulation of the Electron Tunneling Paths in Networks of Nano-particle Films . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ˇ Milovan Suvakov and Bosiljka Tadi´c

641

Classification of Networks Using Network Functions . . . . . . . . . . . . . . . . . . Makoto Uchida and Susumu Shirayama

649

Effective Algorithm for Detecting Community Structure in Complex Networks Based on GA and Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xin Liu, Deyi Li, Shuliang Wang, and Zhiwei Tao

657

Mixed Key Management Using Hamming Distance for Mobile Ad-Hoc Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seok-Lae Lee, In-Kyung Jeun, and Joo-Seok Song

665

An Integrated Approach for QoS-Aware Multicast Tree Maintenance . . . Wu-Hong Tsai and Yuan-Sun Chu

673

A Categorial Context with Default Reasoning Approach to Heterogeneous Ontology Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ruliang Xiao and Shengqun Tang

681

An Interval Lattice Model for Grid Resource Searching . . . . . . . . . . . . . . . Wen Zhou, Zongtian Liu, and Yan Zhao

689

Topic Maps Matching Computation Based on Composite Matchers . . . . . Jungmin Kim and Hyunsook Chung

696

Social Mediation for Collective Intelligence in a Large Multi-agent Communities: A Case Study of AnnotGrid . . . . . . . . . . . . . . . . . . . . . . . . . . Jason J. Jung and Geun-Sik Jo

704

Metadata Management in S-OGSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Oscar Corcho, Pinar Alper, Paolo Missier, Sean Bechhofer, Carole Goble, and Wei Xing Access Control Model Based on RDB Security Policy for OWL Ontology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dongwon Jeong, Yixin Jing, and Doo-Kwon Baik

712

720


Semantic Fusion for Query Processing in Grid Environment . . . . . . . . . . . Jinguang Gu SOF: A Slight Ontology Framework Based on Meta-modeling for Change Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Li Na Fang, Sheng Qun Tang, Ru Liang Xiao, Ling Li, You Wei Xu, Yang Xu, Xin Guo Deng, and Wei Qing Chen Data Forest: A Collaborative Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ronan Jamieson, Adrian Haffegee, Priscilla Ramsamy, and Vassil Alexandrov Net’O’Drom– An Example for the Development of Networked Immersive VR Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Christoph Anthes, Alexander Wilhelm, Roland Landertshamer, Helmut Bressler, and Jens Volkert


728

736

744

752

Intelligent Assembly/Disassembly System with a Haptic Device for Aircraft Parts Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Christiand and Jungwon Yoon

760

Generic Control Interface for Networked Haptic Virtual Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Priscilla Ramsamy, Adrian Haffegee, and Vassil Alexandrov

768

Physically-Based Interaction for Networked Virtual Environments . . . . . . Christoph Anthes, Roland Landertshamer, and Jens Volkert

776

Middleware in Modern High Performance Computing System Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Christian Engelmann, Hong Ong, and Stephen L. Scott

784

Usability Evaluation in Task Orientated Collaborative Environments . . . Florian Urmetzer and Vassil Alexandrov

792

Developing Motivating Collaborative Learning Through Participatory Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gustavo Zurita, Nelson Baloian, Felipe Baytelman, and Antonio Farias

799

A Novel Secure Interoperation System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Li Jin and Zhengding Lu

808

Scalability Analysis of the SPEC OpenMP Benchmarks on Large-Scale Shared Memory Multiprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Karl F¨ urlinger, Michael Gerndt, and Jack Dongarra

815

Analysis of Linux Scheduling with VAMPIR . . . . . . . . . . . . . . . . . . . . . . . . . Michael Kluge and Wolfgang E. Nagel

823


An Interactive Graphical Environment for Code Optimization . . . . . . . . . Jie Tao, Thomas Dressler, and Wolfgang Karl

831

Memory Allocation Tracing with VampirTrace . . . . . . . . . . . . . . . . . . . . . . . Matthias Jurenz, Ronny Brendel, Andreas Kn¨ upfer, Matthias M¨ uller, and Wolfgang E. Nagel

839

Automatic Memory Access Analysis with Periscope . . . . . . . . . . . . . . . . . . Michael Gerndt and Edmond Kereku

847

A Regressive Problem Solver That Uses Knowledgelet . . . . . . . . . . . . . . . . Kuodi Jian

855

Resource Management in a Multi-agent System by Means of Reinforcement Learning and Supervised Rule Learning . . . . . . . . . . . . . . . ´ zy´ Bartlomiej Snie˙ nski

864

Learning in Cooperating Agents Environment as a Method of Solving Transport Problems and Limiting the Effects of Crisis Situations . . . . . . . Jaroslaw Ko´zlak

872

Distributed Adaptive Design with Hierarchical Autonomous Graph Transformation Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Leszek Kotulski and Barbara Strug

880

Integration of Biological, Psychological, and Social Aspects in Agent-Based Simulation of a Violent Psychopath . . . . . . . . . . . . . . . . . . . . . Tibor Bosse, Charlotte Gerritsen, and Jan Treur

888

A Rich Servants Service Model for Pervasive Computing . . . . . . . . . . . . . . Huai-dong Shi, Ming Cai, Jin-xiang Dong, and Peng Liu

896

Techniques for Maintaining Population Diversity in Classical and Agent-Based Multi-objective Evolutionary Algorithms . . . . . . . . . . . . . . . . Rafal Dre˙zewski and Leszek Siwik

904

Agents Based Hierarchical Parallelization of Complex Algorithms on the Example of hp Finite Element Method . . . . . . . . . . . . . . . . . . . . . . . . . . M. Paszy´ nski

912

Sexual Selection Mechanism for Agent-Based Evolutionary Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rafal Dre˙zewski and Krzysztof Cetnarowicz

920

Agent-Based Evolutionary and Immunological Optimization . . . . . . . . . . . Aleksander Byrski and Marek Kisiel-Dorohinicki

928

Strategy Description for Mobile Embedded Control Systems Exploiting the Multi-agent Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vil´em Srovnal, Bohumil Hor´ ak, V´ aclav Sn´ aˇsel, Jan Martinoviˇc, Pavel Kr¨ omer, and Jan Platoˇs

936


Agent-Based Modeling of Supply Chains in Critical Situations . . . . . . . . . Jaroslaw Ko´zlak, Grzegorz Dobrowolski, and Edward Nawarecki

944

Web-Based Integrated Service Discovery Using Agent Platform for Pervasive Computing Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kyu Min Lee, Dong-Uk Kim, Kee-Hyun Choi, and Dong-Ryeol Shin

952

A Novel Modeling Method for Cooperative Multi-robot Systems Using Fuzzy Timed Agent Based Petri Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hua Xu and Peifa Jia

956

Performance Evaluation of Fuzzy Ant Based Routing Method for Connectionless Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seyed Javad Mirabedini and Mohammad Teshnehlab

960

Service Agent-Based Resource Management Using Virtualization for Computational Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sung Ho Jang and Jong Sik Lee

966

Fuzzy-Aided Syntactic Scene Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marzena Bielecka and Marek Skomorowski

970

Agent Based Load Balancing Middleware for Service-Oriented Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jun Wang, Yi Ren, Di Zheng, and Quan-Yuan Wu

974

A Transformer Condition Assessment System Based on Data Warehouse and Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xueyu Li, Lizeng Wu, Jinsha Yuan, and Yinghui Kong

978

Shannon Wavelet Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Carlo Cattani

982

Wavelet Analysis of Bifurcation in a Competition Model . . . . . . . . . . . . . . Carlo Cattani and Ivana Bochicchio

990

Evolution of a Spherical Universe in a Short Range Collapse/Generation Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ivana Bochicchio and Ettore Laserra

997

On the Differentiable Structure of Meyer Wavelets . . . . . . . . . . . . . . . . . . . 1004 Carlo Cattani and Luis M. S´ anchez Ruiz Towards Describing Multi-fractality of Traffic Using Local Hurst Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012 Ming Li, S.C. Lim, Bai-Jiong Hu, and Huamin Feng A Further Characterization on the Sampling Theorem for Wavelet Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021 Xiuzhen Li and Deyun Yang


Characterization on Irregular Tight Wavelet Frames with Matrix Dilations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029 Deyun Yang, Zhengliang Huan, Zhanjie Song, and Hongxiang Yang Feature Extraction of Seal Imprint Based on the Double-Density Dual-Tree DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037 Li Runwu, Fang Zhijun, Wang Shengqian, and Yang Shouyuan Vanishing Waves on Semi-closed Space Intervals and Applications in Mathematical Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045 Ghiocel Toma Modelling Short Range Alternating Transitions by Alternating Practical Test Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053 Stefan Pusca Different Structural Patterns Created by Short Range Variations of Internal Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060 Flavia Doboga Dynamic Error of Heat Measurement in Transient . . . . . . . . . . . . . . . . . . . . 1067 Fang Lide, Li Jinhai, Cao Suosheng, Zhu Yan, and Kong Xiangjie Truncation Error Estimate on Random Signals by Local Average . . . . . . . 1075 Gaiyun He, Zhanjie Song, Deyun Yang, and Jianhua Zhu A Numerical Solutions Based on the Quasi-wavelet Analysis . . . . . . . . . . . 1083 Z.H. Huang, L. Xia, and X.P. He Plant Simulation Based on Fusion of L-System and IFS . . . . . . . . . . . . . . . 1091 Jinshu Han A System Behavior Analysis Technique with Visualization of a Customer’s Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1099 Shoichi Morimoto Research on Dynamic Updating of Grid Service . . . . . . . . . . . . . . . . . . . . . . 1107 Jiankun Wu, Linpeng Huang, and Dejun Wang Software Product Line Oriented Feature Map . . . . . . . . . . . . . . . . . . . . . . . . 1115 Yiyuan Li, Jianwei Yin, Dongcai Shi, Ying Li, and Jinxiang Dong Design and Development of Software Configuration Management Tool to Support Process Performance Monitoring and Analysis . . . . . . . . . . . . . 1123 Alan Cline, Eun-Pyo Lee, and Byong-Gul Lee Data Dependency Based Recovery Approaches in Survival Database Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1131 Jiping Zheng, Xiaolin Qin, and Jin Sun


Usage-Centered Interface Design for Quality Improvement . . . . . . . . . . . . . 1139 Chang-Mog Lee, Ok-Bae Chang, and Samuel Sangkon Lee Description Logic Representation for Requirement Specification . . . . . . . . 1147 Yingzhou Zhang and Weifeng Zhang Ontologies and Software Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155 Waralak V. Siricharoen Epistemological and Ontological Representation in Software Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1162 J. Cuadrado-Gallego, D. Rodr´ıguez, M. Garre, and R. Rejas Exploiting Morpho-syntactic Features for Verb Sense Distinction in KorLex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1170 Eunryoung Lee, Ae-sun Yoon, and Hyuk-Chul Kwon Chinese Ancient-Modern Sentence Alignment . . . . . . . . . . . . . . . . . . . . . . . . 1178 Zhun Lin and Xiaojie Wang A Language Modeling Approach to Sentiment Analysis . . . . . . . . . . . . . . . 1186 Yi Hu, Ruzhan Lu, Xuening Li, Yuquan Chen, and Jianyong Duan Processing the Mixed Properties of Light Verb Constructions . . . . . . . . . . 1194 Jong-Bok Kim and Kyung-Sup Lim Concept-Based Question Analysis for an Efficient Document Ranking . . . 1202 Seung-Eun Shin, Young-Min Ahn, and Young-Hoon Seo Learning Classifier System Approach to Natural Language Grammar Induction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210 Olgierd Unold Text Retrieval Oriented Auto-construction of Conceptual Relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214 Yi Hu, Ruzhan Lu, Yuquan Chen, and Bingzhen Pei Filtering Methods for Feature Selection in Web-Document Clustering . . . 1218 Heum Park and Hyuk-Chul Kwon A Korean Part-of-Speech Tagging System Using Resolution Rules for Individual Ambiguous Word . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222 Young-Min Ahn, Seung-Eun Shin, Hee-Geun Park, Hyungsuk Ji, and Young-Hoon Seo An Interactive User Interface for Text Display . . . . . . . . . . . . . . . . . . . . . . . 1226 Hyungsuk Ji and Hyunseung Choo Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231

Resolving Occlusion Method of Virtual Object in Simulation Using Snake and Picking Algorithm

JeongHee Cha, GyeYoung Kim, and HyungIl Choi

Information and Media Institute, School of Computing, School of Media, Soongsil University, Sangdo 5 Dong, DongJak Gu, Seoul, Korea
[email protected], {gykim1,hic}@ssu.ac.kr

Abstract. For realistic simulation, it is essential to register the two worlds, calculate the occlusion realm between the real world and the virtual object, and determine the location of the virtual object based on the calculation. However, if the constructed map is not accurate or its density is not sufficient to estimate the occlusion boundary, it is very difficult to determine the occlusion realm. In order to solve this problem, this paper proposes a new method for calculating the occlusion realm using the snake and picking algorithms. First, the wireframe generated from the DEM was registered with the real CCD image using visual clues to acquire 3D information of the experimental realm, and 3D information was then calculated at the points where the occlusion problem occurs for a moving virtual target. The validity of the proposed approach in an environment where partial occlusion occurs is demonstrated by an experiment.

Keywords: Occlusion, Snake, Picking, DEM, Augmented Reality, Simulation.

1 Introduction

Augmented reality is an area of technology that originated in virtual reality. While virtual reality offers a virtual world in which users are completely immersed, augmented reality offers virtual objects on the basis of real world images. At present, augmented reality technology is being researched and applied in various areas including the military, medicine, education, construction, games, and broadcasting. This paper studies the development of a realistic simulated training model through the display of virtual targets in the input images of a CCD camera mounted on a tank, and the determination of the occlusion areas generated as virtual objects are created and moved along a path according to a scenario. Augmented reality has three general characteristics: image registration, interaction, and real time [1]. Image registration refers to matching the locations of the real world objects that users watch with the related virtual objects, and real time means that this registration and the interaction must be performed in real time. Interaction implies that the combination of virtual objects and the objects in real images must be harmonized with the surrounding environment in a realistic manner, and refers to the determination of occlusion areas according to the changed location or line of sight of the observer, or the re-rendering of virtual objects after detection of collisions.


However, to solve the occlusion problems, such as the hiding of farther virtual objects by closer objects and the covering of objects in real images by other objects, the two worlds must be accurately registered and then the depth of the actual scene must be compared with the depth of the virtual objects [2][3]. But if the accuracy or density of the created map is insufficient to estimate the boundary of the occlusion area, it is difficult to determine the occlusion area. To solve this problem, first, we created a 3D wireframe using the DEM of the experiment area and then registered it with the CCD camera images using visual clues. Second, to estimate the occlusion boundary accurately regardless of the density of the map, this paper proposes a method that obtains the reference 3D information of the occlusion points using the Snake algorithm and the Picking algorithm and then infers the 3D information of the other boundary points using the proportional relation between the 2D image and the 3D DEM. Third, to improve processing speed, we suggest a method that compares the MER (Minimum Enclosing Rectangle) of each object within the camera's angle of vision with the MER of the virtual target. Fig. 1 shows the proposed system framework.

Fig. 1. Proposed System Framework

2 Methodology

2.1 Formation of Wireframe Using DEM and Registration with Real Images Using Visual Clues

The topographical information DEM (Digital Elevation Model) is used to map real world coordinates to each point of the 2D CCD image. The DEM holds latitude and longitude coordinates, expressed as X and Y, and heights at a fixed interval. The DEM used for this experiment is a grid-type DEM that provides height information for 2D coordinates at 1 m intervals over the limited experiment area of 300 m x 300 m.


The DEM data are read to create a mesh with the vertexes of each rectangle, yielding a wireframe with 3D depth information as shown in Fig. 2 [4][5]. This wireframe is overlaid on the sensor image to check the registration, and visual clues are used to move the image up, down, left, or right as shown in Fig. 3, thus reducing error. Starting from this initial registered location, the location changes caused by the movement of the vehicle are updated frequently using GPS (Global Positioning System) and INS (Inertial Navigation System).
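As a concrete illustration of this construction, the following Python sketch builds one 3D vertex per DEM sample and one quad per grid cell; the array layout, the 1 m spacing default, and the synthetic example grid are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dem_to_wireframe(dem, spacing=1.0):
    """Build vertex and quad lists for a wireframe mesh from a grid DEM.

    dem     : 2D array of heights (e.g. one sample per metre over the
              300 m x 300 m test area described in the paper)
    spacing : grid interval in metres
    Returns (vertices, quads): vertices is an (N, 3) array of (x, y, z)
    points, quads is a list of four vertex indices per grid cell.
    """
    rows, cols = dem.shape
    xs, ys = np.meshgrid(np.arange(cols) * spacing,
                         np.arange(rows) * spacing)
    # One 3D vertex per DEM sample: planimetric position plus height.
    vertices = np.column_stack([xs.ravel(), ys.ravel(), dem.ravel()])

    # One quad per grid cell, formed by the four surrounding samples.
    quads = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            quads.append((i, i + 1, i + cols + 1, i + cols))
    return vertices, quads

# Usage with a synthetic 300 x 300 height grid.
dem = np.random.rand(300, 300) * 5.0
vertices, quads = dem_to_wireframe(dem)
```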

Fig. 2. Wireframe Creation using DEM

Fig. 3. Registration of Two Worlds using Visual Clues

2.2 Extracting the Outline of Objects and Acquiring 3D Information

The Snake algorithm [6][7] finds the outline of an object by repeatedly moving the snake vertexes, input by the user, in the direction that minimizes an energy function. The energy function is shown in Expression (1). As the energy function is calculated over a discrete space, the parameters of each energy term are the coordinates of the vertexes in the image. In Expression (1), $v(s) = (x(s), y(s))$ is the snake point, where $x(s)$ and $y(s)$ are the x and y positions of the snake point in the image. Also, $\alpha$, $\beta$ and $\gamma$ are weights; this paper used $\alpha = 1$, $\beta = 0.4$ and $\gamma = 2.0$, respectively.

$$E_{snake} = \int_{0}^{1} \big( \alpha E_{cont}(v(s)) + \beta E_{curve}(v(s)) + \gamma E_{image}(v(s)) \big)\, ds \qquad (1)$$

The first term is the energy function that represents the continuity of the snake vertexes surrounding the occlusion area, and the second term is the energy function that controls the smoothness of the curve forming the snake; its value increases with the curvature, enabling the detection of corner points. Lastly, $E_{image}$ is an image feature function. All energy terms are normalized to have a value between 0 and 1. As shown in Table 1, the algorithm extracts the outline by repeatedly performing an energy minimization step which sets a 3x3 pixel window at each vertex $v(i)$, finds the position within the window where the energy is minimized in consideration of the continuity between the previous and next vertexes, the curvature, and the edge strength, and then moves the vertex to that position.

Table 1. Snake Algorithm
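A minimal Python sketch of the greedy 3x3-window minimization described above is given below. The weights follow the values quoted for Expression (1); the per-term scaling and the stopping rule are simplified assumptions rather than the authors' exact procedure from Table 1.

```python
import numpy as np

def greedy_snake(vertices, edge_map, alpha=1.0, beta=0.4, gamma=2.0,
                 max_iter=200):
    """Greedy contour refinement in the spirit of Sect. 2.2.

    vertices : iterable of (x, y) snake points placed around the occlusion area
    edge_map : 2D array of edge strength (e.g. gradient magnitude);
               larger values mean stronger edges
    """
    v = np.asarray(vertices, dtype=float)
    n = len(v)
    h, w = edge_map.shape
    offsets = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

    for _ in range(max_iter):
        moved = 0
        # Average spacing between consecutive vertexes (closed contour).
        mean_dist = np.mean(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))
        for i in range(n):
            prev_pt, next_pt = v[i - 1], v[(i + 1) % n]
            best, best_e = v[i], np.inf
            for dx, dy in offsets:
                cand = v[i] + np.array([dx, dy], dtype=float)
                cand[0] = np.clip(cand[0], 0, w - 1)
                cand[1] = np.clip(cand[1], 0, h - 1)
                x, y = int(cand[0]), int(cand[1])
                # E_cont: keep the spacing close to the average spacing.
                e_cont = abs(mean_dist - np.linalg.norm(cand - prev_pt))
                # E_curve: penalize sharp bends (second finite difference).
                e_curve = np.linalg.norm(prev_pt - 2.0 * cand + next_pt)
                # E_image: prefer strong edges (lower energy on edges).
                e_image = -edge_map[y, x]
                energy = alpha * e_cont + beta * e_curve + gamma * e_image
                if energy < best_e:
                    best, best_e = cand, energy
            if not np.allclose(best, v[i]):
                v[i] = best
                moved += 1
        if moved == 0:  # no vertex moved: a local energy minimum was reached
            break
    return v
```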

2.3 Acquisition of 3D Information Using the Picking Algorithm

To acquire the 3D information of the extracted vertexes, this paper uses the Picking algorithm, a well-known 3D graphics technique [8]. It finds the collision point between a point in the 2D image and the 3D wireframe created from the DEM, and thereby provides the 3D information of that point. The picking search point is the lowest of the vertexes of the objects extracted from the 2D image. The screen coordinate system, the rectangular area containing the projection-transformed figure in the 3D rendering process, must be converted to the viewport coordinate system in which the actual 3D topography exists in order to pick the position where the mouse actually lies. First, the viewport-to-screen conversion matrix is used to obtain the conversion from the 2D screen to the 3D projection window, and then a ray is extended gradually from the projection window to the ground surface to obtain the collision point between the search point and the ground. Fig. 4 is an example of picking the collision point between the ray and the DEM. The lowest point of the occlusion area, indicated by an arrow, is the reference point to search, and it becomes the actual position of the 2D image point in 3D space.
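The sketch below shows the ray-marching part of such a picking step in Python. It assumes the caller has already un-projected the 2D screen point into a world-space ray (the screen-to-viewport matrix handling described above is not reproduced), and the step size, function name and DEM layout are illustrative choices, not the paper's implementation.

```python
import numpy as np

def pick_dem_point(ray_origin, ray_dir, dem, spacing=1.0, step=0.5, max_dist=1000.0):
    """March along a picking ray until it first drops below the DEM surface.

    dem : 2D array of heights on a regular grid with the given spacing.
    Returns the approximate collision point, or None if the ray never hits.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    t = 0.0
    while t < max_dist:
        p = ray_origin + t * ray_dir
        c, r = int(p[0] / spacing), int(p[1] / spacing)
        if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]:
            if p[2] <= dem[r, c]:              # the ray has hit the terrain
                return np.array([p[0], p[1], dem[r, c]])
        t += step
    return None                                 # no collision within max_dist
```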


Fig. 4. 3D information extraction using the collision point of the ray and the DEM: (a) occlusion candidate; (b) matching reference point and DEM; (c) 3D information extraction

2.4 Creation of 3D Information Using Proportional Relational Expression

The collision point, or reference point, has 3D coordinates in the DEM, but the other snake vertexes that form the object outline have no collision point and therefore cannot obtain 3D coordinates directly. This paper therefore obtains a proportional relation between the 2D image and the 3D DEM from the reference point and uses it to compute the 3D coordinates of the other vertexes. Fig. 5 shows the proportional relation between 2D and 3D vertexes. In Fig. 5, $S_m$ is the centre of the screen; $S_B$ is the reference point among the snake vertexes (the lowest point), with $\Delta S_B = (\Delta S_{xB}, \Delta S_{yB})$; $S_k$ is a point other than the reference point, with $\Delta S_k = (\Delta S_{xk}, \Delta S_{yk})$. $P_m$ is the projection point, in 3D, of the straight line through $P_B$ and the centre of the screen; $P_B$ is the 3D point corresponding to $S_B$, with $\Delta P_B = (\Delta P_{xB}, \Delta P_{yB}, \Delta P_{zB})$; $P_k$ is a point other than the reference point, with $\Delta P_k = (\Delta P_{xk}, \Delta P_{yk}, \Delta P_{zk})$. Furthermore, $t = \overline{P_o P_B}$, $t_m = \overline{P_o P_m}$, $\theta_B = \angle(t, t')$, $\varphi_B = \angle(t', t_m)$, and $t'$ is the projection of $t$ onto the xz plane.

Fig. 5. Proportional Relation of the Vertex in 2D and 3D

To get $P_m$, which lies on the line through the centre of the screen, $t'$ must be obtained first using the coordinates of the reference point found above. As the value of $t$ is given by the picking ray, the given $t$ and $\Delta P_{yB}$ are used to get $\theta_B$, and $t'$ is then obtained from $\theta_B$ as in Expression (2):

$$\theta_B = \sin^{-1}\!\left(\frac{\Delta P_{yB}}{\|t\|}\right), \qquad \|t'\| = \|t\|\cos(\theta_B) \tag{2}$$

To get $t_m$, the angle $\varphi_B$ between $t'$ and $t_m$ is obtained, and $t_m$ then follows from Expression (3):

$$\varphi_B = \tan^{-1}\!\left(\frac{\Delta P_{xB}}{\|t'\|}\right), \qquad \|t'\| = \|t_m\|\cos(\varphi_B), \qquad \|t_m\| = \frac{\|t'\|}{\cos(\varphi_B)} \tag{3}$$

Because $\|t_m\| = P_{zm}$, we have $P_m = (0, 0, \|t_m\|)$.

Now we can express the relation between the 2D screen coordinates in Fig. 5 and the 3D space coordinates, and use it to get $P_k$, the 3D point corresponding to each 2D snake vertex:

$$\Delta S_B : \Delta P_B = \Delta S_k : \Delta P_k, \qquad \Delta S_{xB} : \Delta P_{xB} = \Delta S_{xk} : \Delta P_{xk}, \qquad \Delta P_{xk} = \frac{\Delta P_{xB} \times \Delta S_{xk}}{\Delta S_{xB}},$$
$$\Delta S_{yB} : \Delta P_{yB} = \Delta S_{yk} : \Delta P_{yk}, \qquad \Delta P_{yk} = \frac{\Delta P_{yB} \times \Delta S_{yk}}{\Delta S_{yB}} \tag{4}$$

Consequently, we obtain $\Delta P_k = (\Delta P_{xk}, \Delta P_{yk})$, the 3D space point corresponding to each snake vertex to be searched.
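As a small illustration of Expression (4), the following hypothetical helper scales the known 3D offset of the reference point by the ratio of 2D screen offsets; the variable names are made up for the example and are not from the paper.

```python
def snake_vertex_to_3d_offset(delta_s_k, delta_s_b, delta_p_b):
    """Expression (4) as code (sketch).

    delta_s_k : (dSx, dSy) screen offset of a snake vertex from the screen centre.
    delta_s_b : (dSx, dSy) screen offset of the reference (lowest) point.
    delta_p_b : (dPx, dPy) 3D offset of the reference point from P_m.
    Returns the 3D offset (dPx, dPy) of the snake vertex.
    """
    dpx = delta_p_b[0] * delta_s_k[0] / delta_s_b[0]
    dpy = delta_p_b[1] * delta_s_k[1] / delta_s_b[1]
    return (dpx, dpy)
```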

2.5 Creation of Virtual Target Path and Selection of Candidate Occlusion Objects Using MER (Minimum Enclosing Rectangle)

To test the proposed occlusion-resolving algorithm, we created a movement path for a virtual target and determined the changes of direction and shape of the target as well as its 3D position. First, the beginning and end points of the path set by the instructor were saved, the angle between these two points was calculated, and the direction and shape of the target were updated according to the change of the angle. Further, the remaining distance was calculated from the speed and elapsed time of the target, and the 3D coordinates of the position after each movement were determined. We also propose improving processing speed by comparing the MER (Minimum Enclosing Rectangle) of each object in the camera's field of view with the MER of the virtual target, because relational operations between the virtual target and every object extracted from the image for occlusion processing take much time. The MER of an object refers to the


minimum rectangle that can enclose the object. The objects that have an overlapping area are determined by comparing the MERs of the objects in the camera image with the MER of the virtual target. In addition, the distance between an object and the virtual target is obtained using the fact that the object and the virtual target lie roughly on a straight line from the camera, and this value is used to determine whether an object exists between the virtual target and the camera.
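The pre-selection step can be summarised by the small sketch below, which uses axis-aligned rectangles; the function names are illustrative and the actual system may compute the MER differently.

```python
def mer(points):
    """Minimum enclosing axis-aligned rectangle of 2D points: (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def mer_overlap(a, b):
    """True if two rectangles share any area (used to pre-select occlusion candidates)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Only objects whose MER overlaps the virtual target's MER need the full
# per-vertex occlusion test, which is what improves the processing speed.
```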

3 Experimental Results

Fig. 6 (left) shows the movement path of the virtual target set by the trainee, and Fig. 6 (right) shows the various virtual target appearances created to display the target as it changes with movement across the image.

Fig. 6. Moving Route Creation (left) and Appearance of the Virtual Object as it Moved (right)

Fig. 7 shows the virtual target moving along the path, frame by frame. We can see that as the frames progress, occlusion occurs between the tank and the object.

Fig. 7. Experimental Results of Moving and Occlusion

Table 2 compares the case in which snake vertexes are used to select the objects in the image to compare with the virtual target against the case in which the proposed MER is used. With the proposed method, the frame rate improved by a factor of about 1.67 (from 2.687 to 4.492 frames per second), a clear performance gain.

Table 2. Speed Comparison

Method          | Total frames | Used objects | Speed (sec) | Frames per sec.
Snake vertexes  | 301          | 10           | 112         | 2.687
MER (proposed)  | 301          | 10           | 67          | 4.492

4 Conclusions

To efficiently solve the occlusion problem that occurs when a virtual target moves along a specified path over an actual image, we created a 3D virtual world using a DEM and registered it with camera images using visual clues. Moreover, the Snake algorithm and the Picking algorithm were used to extract an object outline close to the original shape and to determine the 3D information of the points to be occluded. To increase the occlusion processing speed, this paper also used the 3D information of the MER of each object, and the validity of the proposed method was demonstrated through experiments. In the future, more research is required on occlusion-area extraction that is robust against illumination changes, as well as on further improvement of the processing speed.

Acknowledgement This work was supported by the Korea Research Foundation Grant funded by the Korean Government(MOEHRD)(KRF-2006-005-J03801).

References

[1] Bimber, O., Raskar, R.: Spatial Augmented Reality: A Modern Approach to Augmented Reality. SIGGRAPH 2005, Los Angeles, USA (2005)
[2] Noh, J.Y., Neumann, U.: Expression Cloning. In: SIGGRAPH '01, pp. 277-288 (2001)
[3] Chen, E.: QuickTime VR - An Image-Based Approach to Virtual Environment Navigation. In: Proc. of SIGGRAPH (1995)
[4] Ji, L., Yan, H.: Attractable Snakes Based on the Greedy Algorithm for Contour Extraction. Pattern Recognition 35, pp. 791-806 (2002)
[5] Lean, C.C.H., See, A.K.B., Shanmugam, S.A.: An Enhanced Method for the Snake Algorithm. In: First International Conference on Innovative Computing, Information and Control (ICICIC'06), Vol. I, pp. 240-243 (2006)
[6] Wu, S.-T., Abrantes, M., Tost, D., Batagelo, H.C.: Picking and Snapping for 3D Input Devices. In: Proceedings of SIBGRAPI 2003, pp. 140-147 (2003)

Graphics Hardware-Based Level-Set Method for Interactive Segmentation and Visualization

Helen Hong 1 and Seongjin Park 2

1 Division of Multimedia Engineering, College of Information and Media, Seoul Women's University, 126 Gongreung-dong, Nowon-gu, Seoul 139-774, Korea
2 School of Computer Science and Engineering, Seoul National University, San 56-1 Shinlim-dong, Kwanak-gu, Seoul 151-741, Korea
[email protected], [email protected]

Abstract. This paper presents an efficient graphics hardware-based method to segment and visualize level-set surfaces at interactive rates. Our method is composed of a memory manager, a level-set solver, and a volume renderer. The memory manager, which runs on the CPU, generates the page table, inverse page table and available page stack, and processes the activation and inactivation of pages. The level-set solver computes only voxels near the iso-surface. To run efficiently on GPUs, the volume is decomposed into a set of small pages; only those pages with non-zero derivatives are stored on the GPU, and these active pages are packed into a large 2D texture. The level-set partial differential equation (PDE) is computed directly on this packed format, and the memory manager helps manage the packing of the active data. The volume renderer performs volume rendering of the original data simultaneously with the evolving level set on the GPU. Experimental results using two chest CT datasets show that our graphics hardware-based level-set method is much faster than a software-based one. Keywords: Segmentation, Level-Set, Volume rendering, Graphics hardware, CT, Lung.

1 Introduction

The level-set method is a numerical technique for tracking interfaces and shapes [1]. The advantage of the level-set method is that one can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize these objects. In addition, the level-set method makes it easy to follow shapes which change topology. All these make the level-set method a great tool for modeling time-varying objects. Thus, deformable iso-surfaces modeled by the level-set method have demonstrated a great potential in visualization for applications such as segmentation, surface processing, and surface reconstruction. However, the use of level sets in visualization is limited by their high computational cost and reliance on significant parameter tuning.


Several methods have been suggested to accelerate the computation. Adalsteinson and Sethian [2] proposed the narrow band method, which only computes the points near the front at each time step and is thus more efficient than the standard level-set approach; however, the computation time is still large, especially when the image size is large. Paragios and Deriche introduced the Hermes algorithm, which propagates in a small window each time to achieve a much faster computation. Sethian [3] presented a monotonically advancing scheme; it is restricted to a one-directional speed term and the front's geometric properties are omitted, and unfortunately the stop criteria have to be decided carefully so that the front does not exceed the boundary. Whitaker [4] proposed the sparse-field method, in which updates are calculated only on the wavefront and several layers around that wavefront are updated via a distance transform at each iteration. To overcome these limitations of software-based level-set methods, we propose an efficient graphics hardware-based method to segment and visualize level-set surfaces at interactive rates.

2 Level-Set Method on Graphics Hardware

Our method is composed of a memory manager, a level-set solver and a volume renderer, as shown in Figure 1. First, in order to help manage the packing of the active data, the memory manager generates the page table, inverse page table and available page stack, and processes the activation and inactivation of pages. Second, the level-set solver computes only voxels near the iso-surface, like the sparse-field level-set method; to run efficiently on GPUs, the volume is decomposed into a set of small pages. Third, the volume renderer performs volume rendering of the original data simultaneously with the evolving level set.

2.1 Memory Manager

Generally, the size of texture memory in graphics hardware is rather small, so it is difficult to load a large medical volume dataset, with over 1000 slices of 512 x 512 images, into texture memory. To overcome this limitation, only the level-set pages near the iso-surface, called active pages, are loaded. In this section, we propose an efficient method to manage these active pages. First, main memory on the CPU and texture memory on the GPU are divided into pages. Then the data structure shown in Fig. 2 is generated. To exchange the corresponding page numbers between main memory and texture memory, a page table, which converts a main-memory page number to the corresponding texture-memory page number, and an inverse page table, which converts a texture-memory page number to the corresponding main-memory page number, are generated. In addition, the available page stack is generated to manage empty pages in texture memory.


Fig. 1. The flow chart of our method on graphics hardware

Fig. 2. Data structure for memory management


In the level-set method, the pages containing the front change as the front grows or shrinks. To manage these pages, activation and inactivation are performed as shown in Fig. 3. The activation process occurs when the evolving front needs an inactive page of texture memory: main memory requests a new texture page from the available page stack, and the top page of the stack is popped, as shown in Fig. 3(a). The inactivation process occurs when the evolving front leaves an active page of texture memory: as shown in Fig. 3(b), main memory requests the removal of the active page from texture memory, and the removed page is pushed back onto the available page stack.
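The CPU-side bookkeeping can be sketched as follows. This is a minimal illustration of the page table, inverse page table and available page stack described above; the class and method names are invented for the example and the real system stores the tables alongside GPU texture pages.

```python
class PageManager:
    """CPU-side bookkeeping for active level-set pages (illustrative sketch)."""

    def __init__(self, num_texture_pages):
        self.page_table = {}          # main-memory page -> texture page
        self.inverse_page_table = {}  # texture page -> main-memory page
        self.available = list(range(num_texture_pages))  # stack of free texture pages

    def activate(self, mem_page):
        """The evolving front enters a page that is not yet in texture memory."""
        if mem_page in self.page_table:
            return self.page_table[mem_page]
        tex_page = self.available.pop()            # pop the top of the stack
        self.page_table[mem_page] = tex_page
        self.inverse_page_table[tex_page] = mem_page
        return tex_page

    def inactivate(self, mem_page):
        """The front has left this page; return its texture page to the stack."""
        tex_page = self.page_table.pop(mem_page)
        del self.inverse_page_table[tex_page]
        self.available.append(tex_page)            # push back onto the stack
```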

Fig. 3. The process of page activation and inactivation: (a) page activation process; (b) page inactivation process

During level-set computation on the GPU, the partial differential equation is computed using information from the current pixel and its neighbors. When an interior pixel of a page in texture memory is referenced, the PDE can be calculated without preprocessing. When a boundary pixel of a page is referenced, however, the neighboring page must be consulted to obtain the neighbor pixel's information, and this is difficult to do during PDE calculation on the GPU. In that case, a vertex buffer is created on the CPU to store the locations of the current and neighbor pixels. For this, we define nine different cases as shown in Fig. 4. For the 1st, 3rd, 5th and 7th vertices, the two neighboring pages are consulted, and the neighbor-pixel locations are saved to the vertex buffer together with the current pixel location. For the 2nd, 4th, 6th and 8th vertices, one neighboring page is consulted. For the 9th vertex, the


location of the current pixel is saved to the vertex buffer without consulting any neighboring page. The location of a neighbor pixel is calculated using the page table and the inverse page table as in Eq. (1).

Fig. 4. Nine different cases for referring neighbor page

$$T_{num} = \frac{T_{addr}}{PageSize}, \quad M_{num} = InversePageTable(T_{num}), \quad neighbor(M_{num}) = M_{num} + neighborOffset, \quad neighbor(T_{num}) = PageTable(neighbor(M_{num})) \tag{1}$$

where $T_{num}$ is the page number in texture memory, $T_{addr}$ is the page address in texture memory, $M_{num}$ is the page number in main memory, and $PageSize$ is defined as 16 x 16.

2.2 Level-Set Solver

The efficient solution of the level-set PDE relies on updating only those voxels that are on or near the iso-surface. The narrow-band and sparse-field methods achieve this by operating on sequences of heterogeneous operations; for instance, the sparse-field method keeps a linked list of active voxels on which the computation is performed. However, maximum efficiency on a GPU is achieved when homogeneous operations are applied to each pixel, and applying a different operation to each pixel of a page burdens CPU-to-GPU message passing. To run efficiently on the GPU, our level-set solver applies heterogeneous operations only across the nine cases defined during the creation of the vertex buffer. Fig. 5 shows that the vertex buffer transferred to the GPU during vertex shading is divided into apex (1st, 3rd, 5th, 7th), edge (2nd, 4th, 6th, 8th) and inner (9th) parts. Sixteen vertex buffers are transferred, containing the locations of four apex points for the apex case, eight end points for the edge case, and four apex points for the inner case. The level-set computation is then carried out using Eqs. (2) and (3):

$$D(I) = \varepsilon - |I - T| \tag{2}$$
$$\phi = \phi - |\nabla\phi| \cdot D(I) \tag{3}$$


where $I$ is the intensity value of the image, $D(I)$ is the speed function, $\phi$ is the level-set value, and $T$ and $\varepsilon$ are the average intensity value and the standard deviation of the region being segmented, respectively.
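For clarity, a CPU-side NumPy sketch of Eqs. (2) and (3) is shown below. It updates a dense array in one step, whereas the paper evaluates the same update only on active pages packed into GPU texture memory; the unit time step is an assumption of the sketch.

```python
import numpy as np

def level_set_update(phi, image, T, eps, dt=1.0):
    """One explicit update of Eqs. (2)-(3) on a dense grid (CPU sketch).

    phi   : current level-set values
    image : intensity volume or slice I
    T     : mean intensity of the region being segmented
    eps   : its standard deviation
    """
    speed = eps - np.abs(image - T)                 # D(I) = eps - |I - T|
    grads = np.gradient(phi)                        # central differences
    grad_mag = np.sqrt(sum(g * g for g in grads))
    return phi - dt * grad_mag * speed              # phi <- phi - |grad phi| * D(I)
```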

Fig. 5. The process of efficient level-set operation in GPU

2.3 Volume Renderer

The conventional software-based volume rendering techniques, such as ray casting and shear-warp factorization, cannot visualize level-set surfaces at interactive rates. Our volume renderer performs texture-based volume rendering on graphics hardware of the original data simultaneously with the evolving level set. First, the updated level-set values in texture memory are saved to main memory through the inverse page table. Texture-based volume rendering is then applied to visualize the original volume together with the level-set surfaces. For efficient memory use, we use only two channels, for the intensity value and the level-set value, instead of the four RGBA channels. A proxy geometry is then generated using parallel projection, and the three-dimensional texture memory on the GPU is mapped onto the proxy geometry. The slices mapped onto the proxy geometry are rendered using compositing modes that include maximum intensity projection.

3 Experimental Result

All our implementation and tests were performed on a general-purpose computer equipped with an Intel Pentium 4 2.4 GHz CPU and 1 GB of memory. The graphics hardware was an ATI Radeon 9600 GPU with 256 MB of memory. The programs are written in the


DirectX shader programming language. Our method was applied to each unilateral lung of two chest CT datasets to evaluate its accuracy and processing time. The volume resolution of each unilateral lung is 512 x 256 x 128. For packing active pages, the size of the 2D texture memory is 2048 x 2048. Figs. 6 and 7 show that our method segments accurately in two and three dimensions. The segmented lung boundary is presented in red. In Fig. 7, the original volume with level-set surfaces is visualized using maximum intensity projection.

Fig. 6. The results of segmentation using our graphics hardware-based level-set method

Fig. 7. The results of visualizing original volume with level-set surfaces

We have compared our technique with a software-based level-set method under the same conditions. Table 1 compares the total processing time of the two techniques; the total processing time includes the times for page management and level-set computation. As shown in Table 1, our method is over 3.4 times faster than the software-based level-set method. In particular, our computation of the level-set PDE is over 14 times faster than that of the software-based method.

Table 1. The comparison results of total processing time using two different techniques (sec)

Method                                  | Dataset | Page manager | Level-set solver | Total processing time | Average
Proposed graphics hardware-based method | AL      | 0.38         | 0.068            | 0.45                  | 0.44
                                        | AR      | 0.37         | 0.066            | 0.44                  |
                                        | BL      | 0.38         | 0.073            | 0.45                  |
                                        | BR      | 0.36         | 0.067            | 0.43                  |
Software-based method                   | AL      | 0.54         | 0.93             | 1.47                  | 1.48
                                        | AR      | 0.55         | 0.94             | 1.49                  |
                                        | BL      | 0.55         | 0.94             | 1.49                  |
                                        | BR      | 0.54         | 0.93             | 1.47                  |
(A, B: the two datasets; L: left lung, R: right lung)

4 Conclusion

We have developed a new tool for interactive segmentation and visualization of level-set surfaces on graphics hardware. Our memory manager helps manage the packing


of the active data. A dynamic, packed texture format allows the efficient processing of time-dependent, sparse GPU computations. While the GPU updates the level set, it renders the surface model directly from this packed texture format. Our method was over 3.4 times faster than the software-based level-set method; in particular, our computation of the level-set PDE was over 14 times faster. The average total processing time of our method was 0.6 seconds, and memory management accounted for most of it. Experimental results show that our solution is much faster than previous optimized solutions based on software techniques.

Acknowledgement This study is supported in part by Special Basic Research Program grant R01-2006000-11244-0 under the Korea Science and Engineering Foundation and in part by Seoul R&D Program.

References

1. Osher, S., Sethian, J.A.: Fronts Propagating with Curvature-Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations. Journal of Computational Physics 79 (1988) 12-49
2. Adalsteinson, D., Sethian, J.A.: A Fast Level Set Method for Propagating Interfaces. Journal of Computational Physics (1995) 269-277
3. Sethian, J.A.: A Fast Marching Level Set Method for Monotonically Advancing Fronts. Proc. Natl. Acad. Sci. USA 93 (1996) 1591-1595
4. Whitaker, R.: A Level-Set Approach to 3D Reconstruction from Range Data. International Journal of Computer Vision (1998) 203-231

Parameterization of Quadrilateral Meshes

Li Liu 1, CaiMing Zhang 1,2, and Frank Cheng 3

1 School of Computer Science and Technology, Shandong University, Jinan, China
2 Department of Computer Science and Technology, Shandong Economic University, Jinan, China
3 Department of Computer Science, College of Engineering, University of Kentucky, USA
[email protected]

Abstract. Low-distortion parameterization of 3D meshes is a fundamental problem in computer graphics. Several widely used approaches have been presented for triangular meshes. But no direct parameterization techniques are available for quadrilateral meshes yet. In this paper, we present a parameterization technique for non-closed quadrilateral meshes based on mesh simplification. The parameterization is done through a simplify-project-embed process, and minimizes both the local and global distortion of the quadrilateral meshes. The new algorithm is very suitable for computer graphics applications that require parameterization with low geometric distortion. Keywords: Parameterization, mesh simplification, Gaussian curvature, optimization.

1 Introduction

Parameterization is an important problem in Computer Graphics and has applications in many areas, including texture mapping [1], scattered data and surface fitting [2], multi-resolution modeling [3], remeshing [4], morphing [5], etc. Due to its importance in mesh applications, the subject of mesh parameterization has been well studied. Parameterization of a polygonal mesh in 3D space is the process of constructing a one-to-one mapping between the given mesh and a suitable 2D domain. Two major paradigms used in mesh parameterization are energy functional minimization and the convex combination approach. Maillot proposed a method to minimize the norm of the Green-Lagrange deformation tensor based on elasticity theory [6]. The harmonic embedding used by Eck minimizes the metric dispersion instead of elasticity [3]. Levy proposed an energy functional minimization method based on orthogonality and homogeneous spacing [7]. A non-deformation criterion is introduced in [8] with extrapolation capabilities. Floater [9] proposed shape-preserving parameterization, where the coefficients are determined by using conformal mapping and barycentric coordinates. The harmonic embedding [3,10] is also a special case of this approach, except that the coefficients may be negative. However, these techniques are developed mainly for triangular mesh parameterization. Parameterization of quadrilateral meshes, on the other hand, is


actually a more critical problem because quadrilateral meshes, with their good properties, are preferred over triangular meshes in finite element analysis. Parameterization techniques developed for triangular meshes are not suitable for quadrilateral meshes because of the different connectivity structures. In this paper, we present a parameterization technique for non-closed quadrilateral meshes based on a simplify-project-embed process. The algorithm has the following advantages: (1) it provably produces good parameterization results for any non-closed quadrilateral mesh that can be mapped to the 2D plane; (2) it minimizes the distortion of both angle and area caused by parameterization; (3) the solution does not place any restrictions on the boundary shape; (4) since the quadrilateral meshes are simplified, the method is fast and efficient. The remaining part of this paper is organized as follows. The new model and the algorithm are presented in detail in Section 2. Test results of the new algorithm are shown in Section 3. Concluding remarks are given in Section 4.

2 Parameterization

Given a non-closed quadrilateral mesh, the parameterization process consists of four steps. The first step is to obtain a simplified version of the mesh by keeping the boundary vertices and the interior vertices with high Gaussian curvature, while deleting interior vertices with low Gaussian curvature. The second step is to map the simplified mesh onto a 2D domain through a global parameterization process. The third step is to embed the deleted interior vertices in the 2D domain through a weighted discrete mapping; this mapping preserves angles and areas and, consequently, minimizes angle and area distortion. The last step is to optimize the parameterization to eliminate overlapping. Details of these steps are described in the subsequent sections.

For a given vertex v in a quadrilateral mesh, the one-ring neighbouring vertices of v are the vertices that share a common face with v. A one-ring neighbouring vertex of v is called an immediate neighbouring vertex if it shares a common edge with v; otherwise, it is called a diagonally neighbouring vertex.

2.1 Simplification Algorithm

The computation cost, as well as the distortion, may be too large if the entire quadrilateral mesh is projected onto the plane. To speed up the parameterization and minimize the distortion, we simplify the mesh structure by reducing the number of interior vertices while retaining a good approximation of the original shape and appearance. The discrete curvature is a good simplification criterion that preserves the shape of the original model. In spite of the extensive use of quadrilateral meshes in geometric modeling and computer graphics, there is no agreement on the most appropriate way to estimate geometric attributes such as curvature on discrete surfaces. By thinking of a


quadrilateral mesh as a piecewise linear approximation of an unknown smooth surface, we can estimate the curvature of a vertex using only the information given by the quadrilateral mesh itself, such as the edges and angles. The estimate does not have to be precise. To speed up the computation, we ignore the effect of diagonally neighbouring vertices and use only the immediate neighbouring vertices to estimate the Gaussian curvature of a vertex, as shown in Fig. 1-(a). We define the integral Gaussian curvature $\bar{K} = \bar{K}_v$ with respect to the area $S = S_v$ attributed to $v$ by

$$\bar{K} = \bar{K}_v = 2\pi - \sum_{i=1}^{n} \theta_i \,. \tag{1}$$

where $\theta_i$ is the angle between two successive edges. To derive the curvature from the integral value, we assume the curvature to be uniformly distributed around the vertex and simply normalize it by the area

$$K = \frac{\bar{K}}{S} \,. \tag{2}$$

where S is the sum of the areas of adjacent faces around the vertex v . Different ways of defining the area S result in different curvature values. We use the Voronoi area, which sums up the areas of vertex v ’s local Voronoi cells. To determine the areas of the local Voronoi cells restricted to a triangle, we distinguish obtuse and nonobtuse triangles as shown in Fig. 1. In the latter case they are given by

$$S_A = \frac{1}{8}\left(\|v_i v_k\|^2 \cot(\gamma_i) + \|v_i v_j\|^2 \cot(\delta_i)\right). \tag{3}$$

For obtuse triangles,

$$S_B = \frac{1}{8}\|v_i v_k\|^2 \tan(\gamma_i), \qquad S_C = \frac{1}{8}\|v_i v_j\|^2 \tan(\delta_i), \qquad S_A = S - S_B - S_C \,. \tag{4}$$

A vertex deletion means the deletion of a vertex with low Gaussian curvature and the incident edges. During the simplification process, we can adjust the tolerance value to control the number of vertices reduced.
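The following Python sketch estimates the curvature criterion used for vertex deletion. It is an approximation under stated assumptions: the one-ring neighbours are given in cyclic order, and the area term uses one third of the incident triangle areas instead of the exact Voronoi split of Eqs. (3)-(4), so it is not the authors' exact formula.

```python
import numpy as np

def gaussian_curvature(v, ring):
    """Discrete Gaussian curvature estimate at vertex v (sketch).

    v    : (3,) vertex position
    ring : (n, 3) immediate one-ring neighbours in cyclic order
    """
    v, ring = np.asarray(v, float), np.asarray(ring, float)
    n = len(ring)
    angle_sum, area = 0.0, 0.0
    for i in range(n):
        e1 = ring[i] - v
        e2 = ring[(i + 1) % n] - v
        cos_t = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
        angle_sum += np.arccos(np.clip(cos_t, -1.0, 1.0))   # theta_i in Eq. (1)
        area += 0.5 * np.linalg.norm(np.cross(e1, e2)) / 3.0  # simplified area weight
    return (2.0 * np.pi - angle_sum) / area                   # Eq. (2)
```

Vertices whose absolute curvature falls below the chosen tolerance are candidates for deletion.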

Fig. 1. Voronoi area. (a) Voronoi cells around a vertex; (b) Non-obtuse angle; (c) Obtuse angle.


2.2 Global Parameterization

Parameterizing a polygonal mesh amounts to computing a correspondence between the 3D mesh and an isomorphic planar mesh through a piecewise linear mapping. For the simplified mesh M obtained in the first step, the goal here is to construct a mapping between M and an isomorphic planar mesh U in $R^2$ that best preserves the intrinsic characteristics of the mesh M. We denote by $v_i$ the 3D position of the i-th vertex in the mesh M, and by $u_i$ the 2D position (parameterized value) of the corresponding vertex in the 2D mesh U. The simplified polygonal mesh M approximates the original quadrilateral mesh, but the angles and areas of M differ from those of the original mesh. We take the edges of the mesh M as springs and project the vertices of the mesh onto the parameterization domain by minimizing the following edge-based energy function

$$\frac{1}{2} \sum_{\{i,j\}\in Edge} \frac{\|u_i - u_j\|^2}{\|v_i - v_j\|^r}, \qquad r \geq 0 \,. \tag{5}$$

where Edge is the edge set of the simplified mesh. The coefficients can be chosen in different ways by adjusting r. This global parameterization is performed on the simplified mesh (with fewer vertices), so it differs from both the global parameterization and the fixed-boundary parameterization of triangular meshes.
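A minimal sketch of one way to minimize Eq. (5) is shown below: the critical-point conditions reduce to repeatedly replacing each free vertex by the weighted average of its neighbours. Pinning a set of vertices (for example the boundary) is an assumption added here to avoid the trivial collapsed solution; the paper does not prescribe this, and the function name is hypothetical.

```python
import numpy as np

def spring_parameterize(verts, edges, u_init, fixed, r=1.0, iters=200):
    """Minimise the edge-spring energy of Eq. (5) by repeated local averaging.

    verts  : (N, 3) 3D positions of the simplified mesh
    edges  : list of (i, j) index pairs
    u_init : (N, 2) initial 2D positions
    fixed  : set of vertex indices held in place (sketch assumption)
    """
    u = u_init.astype(float).copy()
    # spring weight of each edge: 1 / ||v_i - v_j||^r
    w = {e: 1.0 / (np.linalg.norm(verts[e[0]] - verts[e[1]]) ** r + 1e-12) for e in edges}
    nbrs = {i: [] for i in range(len(verts))}
    for (i, j) in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        for i, js in nbrs.items():
            if i in fixed or not js:
                continue
            ws = np.array([w[(i, j)] if (i, j) in w else w[(j, i)] for j in js])
            u[i] = (ws[:, None] * u[js]).sum(axis=0) / ws.sum()
    return u
```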

2.3 Local Parameterization

After the boundary vertices and the interior vertices with high Gaussian curvature are mapped onto the 2D plane, the vertices with low curvature are embedded back onto the parameterization plane. This step has a great impact on the result of the parameterization, so it should preserve as many of the intrinsic qualities of the mesh as possible; we therefore need to define what intrinsic qualities mean for a discrete mesh. In the following, minimal distortion means the best preservation of these qualities.

2.3.1 Discrete Conformal Mapping

Conformal parameterization preserves angular structure, is intrinsic to the geometry and is stable with respect to small deformations. To flatten a mesh onto a two-dimensional plane so that the relative distortion of the planar angles with respect to their counterparts in 3D space is minimized, we introduce an angle-based energy function as follows

$$E_A = \sum_{j\in N(i)} \left(\cot\frac{\alpha_{ij}}{4} + \cot\frac{\beta_{ij}}{4}\right)\|u_i - u_j\|^2 \,. \tag{6}$$

where N (i ) is the set of immediate one-ring neighbouring vertices, and α ij , β ij are

the left and opposite angles of vi , as shown in Fig. 2-(a). The coefficients in the


formula (6) are always positive, which reduces overlapping in the 2D mesh. Minimizing the discrete conformal energy yields a discrete quadratic energy in the parameterization that depends only on the angles of the original surface.

2.3.2 Discrete Authalic Mapping

Authalic mapping preserves area as much as possible. A quadrilateral patch in 3D space is usually not flat, so we cannot obtain an exact area for it. To minimize the area distortion, we divide each quadrilateral patch into four triangular parts and preserve the areas of these triangles respectively. For instance, in Fig. 2-(b) the quadrilateral $v_i v_j v_k v_{j+1}$ is divided into the triangles $\Delta v_i v_j v_{j+1}$, $\Delta v_i v_j v_k$, $\Delta v_i v_k v_{j+1}$ and $\Delta v_j v_k v_{j+1}$. This turns the problem of quadrilateral area preservation into one of triangular area preservation. The mapping resulting from the energy minimization process preserves the area of each vertex's one-ring neighbourhood in the mesh, and can be written as follows

$$E_X = \sum_{j\in N(i)} \frac{\cot\frac{\gamma_{ij}}{2} + \cot\frac{\delta_{ij}}{2}}{\|v_i - v_j\|^2}\, \|u_i - u_j\|^2 \,. \tag{7}$$

where γ ij , δ ij are corresponding angles of the edge (vi , v j ) as shown in Fig. 2-(c). The parameterization deriving from E x is easily obtained, and the way to solve this system is similar to that of the discrete conformal mapping, but the linear coefficients now are functions of local areas of the 3D mesh.

Fig. 2. Edge and angles. (a) Edge and opposite left angles in the conformal mapping; (b) Quadrilateral mesh divided into four triangles; (c) Edge and angles in the authalic mapping.

2.3.3 Weighted Discrete Parameterization

Discrete conformal mapping is an angle-preserving mapping that minimizes the angle distortion of the interior vertices; the resulting mapping preserves the shape but not the area of the original mesh. Discrete authalic mapping is area-preserving and minimizes the area distortion. Although the area of the original


mesh would locally be preserved, the shape tends to be distorted since the mapping from 3D to 2D will in general generate twisted distortion. To minimize the distortion and get better parameterization results, we define linear combinations of the area and the angle distortions as the distortion measures. It turns out that the family of admissible, simple distortion measures is reduced to linear combinations of the two discrete distortion measures defined above. A general distortion measure can thus always be written as

$$E = qE_A + (1 - q)E_X \,. \tag{8}$$

where q is a real number between 0 and 1. By adjusting the scaling factor q, parameterizations appropriate for specific applications can be obtained.
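The role of q can be illustrated with the short sketch below, which evaluates the blended distortion measure of Eq. (8) given per-edge coefficients. How those coefficients are computed from Eqs. (6)-(7) is assumed to be done elsewhere; the function name and argument layout are invented for the example.

```python
import numpy as np

def blended_energy(u, edges, w_conformal, w_authalic, q=0.5):
    """Evaluate E = q*E_A + (1-q)*E_X of Eq. (8) for a 2D embedding u (sketch).

    u           : (N, 2) current 2D positions
    edges       : list of (i, j) pairs
    w_conformal : per-edge coefficients of the angle-based energy, Eq. (6)
    w_authalic  : per-edge coefficients of the area-based energy, Eq. (7)
    """
    e_a = sum(w_conformal[k] * np.sum((u[i] - u[j]) ** 2) for k, (i, j) in enumerate(edges))
    e_x = sum(w_authalic[k] * np.sum((u[i] - u[j]) ** 2) for k, (i, j) in enumerate(edges))
    return q * e_a + (1.0 - q) * e_x
```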

2.4 Mesh Optimization

The parameterization process above does not impose restrictions, such as convexity, on the given quadrilateral mesh. Consequently, overlapping might occur in the projection process. To eliminate overlapping, we optimize the parameterized mesh by adjusting vertex locations without changing the topology. Mesh optimization is a local iterative process in which each vertex is moved to a new location over a number of iterations. Let $u_i^q$ be the location of the parameterization value $u_i$ after $q$ iterations. The optimization process that finds the new location in each iteration is

$$u_i^q = u_i^{q-1} + \lambda_1 \sum_{j=1}^{n} \frac{u_j^{q-1} - u_i^{q-1}}{n} + \lambda_2 \sum_{k=1}^{n} \frac{u_k^{q-1} - u_i^{q-1}}{n}, \qquad 0 < \lambda_1 + \lambda_2 < 1 \,. \tag{9}$$

where $u_j$ and $u_k$ are the parameterization values of the immediate and diagonal neighbouring vertices, respectively. We found that optimizing vertices in a "worst first" order is very helpful, and we define the priority of a vertex as

$$\sigma = \lambda_1 \sum_{j=1}^{n} \frac{u_j^{q-1} - u_i^{q-1}}{n} + \lambda_2 \sum_{k=1}^{n} \frac{u_k^{q-1} - u_i^{q-1}}{n} \,. \tag{10}$$

The priority is computed from shape metrics of each parameterized vertex; the vertex with the worst quality is assigned the highest priority. Through experiments we found that more iterations are needed when overlapping vertices are instead processed in a "first come, first served" order. We also point out that the optimization is local: only overlapping vertices and their one-ring vertices are optimized, which minimizes the distortion and better preserves the parameterization results.
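The relocation and priority rules of Eqs. (9)-(10) can be sketched as follows. The additive, damped-average form and the use of a norm for the scalar priority are interpretations made for this example, and the weights are placeholders; this is not claimed to be the authors' exact code.

```python
import numpy as np

def optimize_vertex(u, i, immediate, diagonal, lam1=0.4, lam2=0.3):
    """One relocation step for vertex i following Eq. (9) (sketch).

    immediate, diagonal : index lists of the immediate and diagonal one-ring
                          neighbours of vertex i; lam1 + lam2 must stay below 1.
    """
    n1, n2 = max(len(immediate), 1), max(len(diagonal), 1)
    pull_immediate = sum(u[j] - u[i] for j in immediate) / n1
    pull_diagonal = sum(u[k] - u[i] for k in diagonal) / n2
    return u[i] + lam1 * pull_immediate + lam2 * pull_diagonal

def vertex_priority(u, i, immediate, diagonal, lam1=0.4, lam2=0.3):
    """Priority of Eq. (10): vertices that would move furthest are fixed first."""
    n1, n2 = max(len(immediate), 1), max(len(diagonal), 1)
    move = lam1 * sum(u[j] - u[i] for j in immediate) / n1 \
         + lam2 * sum(u[k] - u[i] for k in diagonal) / n2
    return float(np.linalg.norm(move))
```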

3 Examples

To evaluate the visual quality of a parameterization we use the checkerboard texture shown in Fig. 3, where the effect of the scaling factor q in Eq. (8) can be seen. In


fact, when q is equal to 0 or 1, the weighted discrete mapping becomes the discrete conformal mapping or the discrete authalic mapping, respectively. Since few parameterization methods exist for quadrilateral meshes, the weighted discrete mapping is compared with the discrete conformal mapping (q = 0) and the discrete authalic mapping (q = 1) in Eq. (8). Fig. 3-(a) and (e) show the sampled quadrilateral meshes. Fig. 3-(b) and (f) show the models with a checkerboard texture map using the discrete conformal mapping (q = 0). Fig. 3-(c) and (g) show the models with a checkerboard texture map using the weighted discrete mapping (q = 0.5). Fig. 3-(d) and (h) show the models with a checkerboard texture map using the discrete authalic mapping (q = 1). The results using the weighted discrete mapping are clearly better than those using the discrete conformal mapping or the discrete authalic mapping.


Fig. 3. Texture mapping. (a) and (e) Models; (b) and (f) Discrete conformal mapping, q = 0; (c) and (g) Weighted discrete mapping, q = 0.5; (d) and (h) Discrete authalic mapping, q = 1.

The results demonstrate that a medium value of q (about 0.5) yields a smoother parameterization with minimal distortion energy; the closer q is to 0 or 1, the larger the angle and area distortions become.

4 Conclusions

A parameterization technique for quadrilateral meshes based on mesh simplification and weighted discrete mapping is presented. Mesh simplification


reduces the computation, and the weighted discrete mapping minimizes angle and area distortion. The scaling factor q of the weighted discrete mapping gives users the flexibility to obtain parameterizations appropriate for specific applications, with different degrees of smoothness and distortion. The major drawback of our current implementation is that the planar embedding may contain concave quadrangles. It is difficult to make all of the planar quadrilaterals convex, even if we convert triangular meshes into quadrilateral meshes by deleting edges. In future work, we will focus on a better objective function to obtain better solutions and on developing a solver that can keep the planar meshes convex.

References

1. Levy, B.: Constrained Texture Mapping for Polygonal Meshes. In: Fiume, E. (ed.): Proceedings of Computer Graphics, ACM SIGGRAPH, New York (2001) 417-424
2. Alexa, M.: Merging Polyhedron Shapes with Scattered Features. The Visual Computer 16 (2000) 26-37
3. Eck, M., DeRose, T., Duchamp, T., Hoppe, H., Lounsbery, M., Stuetzle, W.: Multiresolution Analysis of Arbitrary Meshes. In: Mair, S.G., Cook, R. (eds.): Proceedings of Computer Graphics, ACM SIGGRAPH, Los Angeles (1995) 173-182
4. Alliez, P., Meyer, M., Desbrun, M.: Interactive Geometry Remeshing. In: Proceedings of Computer Graphics, ACM SIGGRAPH, San Antonio (2002) 347-354
5. Alexa, M.: Recent Advances in Mesh Morphing. Computer Graphics Forum 21 (2002) 173-196
6. Maillot, J., Yahia, H., Verroust, A.: Interactive Texture Mapping. In: Proceedings of Computer Graphics, ACM SIGGRAPH, Anaheim (1993) 27-34
7. Levy, B., Mallet, J.: Non-Distorted Texture Mapping for Sheared Triangulated Meshes. In: Proceedings of Computer Graphics, ACM SIGGRAPH, Orlando (1998) 343-352
8. Jin, M., Wang, Y., Yau, S.T., Gu, X.: Optimal Global Conformal Surface Parameterization. In: Proceedings of Visualization, Austin (2004) 267-274
9. Floater, M.S.: Parameterization and Smooth Approximation of Surface Triangulations. Computer Aided Geometric Design 14 (1997) 231-250
10. Lee, Y., Kim, H.S., Lee, S.: Mesh Parameterization with a Virtual Boundary. Computers & Graphics 26 (2006) 677-686

Pose Insensitive 3D Retrieval by Poisson Shape Histogram

Pan Xiang 1, Chen Qi Hua 2, Fang Xin Gang 1, and Zheng Bo Chuan 3

1 Institute of Software, Zhejiang University of Technology, 310014 Zhejiang, P.R. China
2 Institute of Mechanical, Zhejiang University of Technology, 310014 Zhejiang, P.R. China
3 College of Mathematics & Information, China West Normal University, 637002 Nanchong, P.R. China
[email protected]

Abstract. With the rapid increase of available 3D models, content-based 3D retrieval is attracting more and more research interest. Histograms are the most widely used approach to constructing 3D shape descriptors. Most existing histogram-based descriptors, however, do not remain invariant under rigid transforms. In this paper, we propose a new descriptor called the Poisson shape histogram. The main advantage of the proposed descriptor is that it is not sensitive to rigid transforms, and it remains invariant under rotation as well. To extract the Poisson shape histogram, we first convert the given 3D model into a voxel representation. Then, a Poisson solver with a Dirichlet boundary condition is used to compute a shape signature for each voxel. Finally, the Poisson shape histogram is constructed from these shape signatures. Retrieval experiments on a shape benchmark database show that the Poisson shape histogram achieves better performance than other similar histogram-based shape representations. Keywords: 3D shape matching, Pose-Insensitive, Poisson equation, Histogram.

1 Introduction

Recent developments in modeling and digitizing techniques have led to a rapid increase of 3D models. More and more 3D digital models can be accessed freely from the Internet or from other resources, and users can save design time by reusing existing 3D models. As a consequence, the question has changed from "How do we generate 3D models?" to "How do we find them?" [1]. An urgent problem right now is how to help people find their desired 3D models accurately and efficiently from model databases or from the web. Content-based 3D retrieval, which aims to retrieve 3D models by shape matching, has become a hot research topic. In content-based 3D retrieval, histogram-based representations have been widely used for constructing shape features [2]. A histogram-based representation requires the definition of a shape signature, which is the most important part of the histogram descriptor; it should be invariant to transformations such as translation, scaling, rotation and rigid transforms. Some rotation-invariant shape signatures, such as curvature and distance, have been used for content-based 3D retrieval. Those


shape signatures are independent of 3D shape rotation. However, little research has focused on extracting shape signatures that are invariant under rigid transforms, and existing rotation-invariant shape signatures are often sensitive to them. In this paper, we propose a new kind of shape signature called the Poisson shape measure. It remains almost invariant under not only rotation but also rigid transforms. The proposed shape signature is based on Poisson theory. As one of the most important PDE theories, the Poisson equation has been widely used in computer vision, computer graphics, analysis of anatomical structures and image processing [3-5]. However, it has not been used for defining a 3D shape signature and, from that, content-based 3D retrieval. The process of constructing the Poisson shape histogram can be summarized as follows: the given 3D model is first converted into a voxel representation; then a Poisson solver with a Dirichlet boundary condition is used to compute a shape signature for each voxel; finally, the Poisson shape histogram is constructed from the shape signatures. A comparative study shows that the Poisson shape histogram achieves better retrieval performance than similar histogram descriptors. The remainder of the paper is organized as follows: Section 2 provides a brief review of related work. Section 3 discusses the Poisson equation and its relevant properties. Section 4 discusses how to construct the Poisson shape histogram. Section 5 provides the experimental results for content-based 3D retrieval. Finally, Section 6 concludes the paper and recommends some future work.

2 Related Work

Previous shape descriptors can be classified into two groups by their characteristics: structural representations and statistical representations. The method proposed in this paper belongs to the statistical category. This section gives a brief review of statistical shape description for content-based 3D retrieval; for more details about structural descriptors and content-based 3D retrieval, please refer to the survey papers [6-8]. As for statistical representation, the most common approach is first to compute geometric signatures of the given model, such as normals, curvature and distances, and then to use the extracted shape signatures to construct a histogram. Existing shape signatures for 3D shape retrieval can be grouped into two types: rotation-invariant shape signatures and the rest. For the latter, rotation normalization is performed prior to the extraction of the shape signatures.

Rotation-variant shape signatures. The Extended Gaussian Image (EGI) defines a shape feature by the normal distribution over the sphere [9]. An extension of EGI is the Complex Extended Gaussian Image (CEGI) [10], which combines distance and normal information in the descriptor. Shape histograms defined on shells and sectors around a model centroid capture the point distribution [11]. Transform-based shape features can be seen as a post-processing of the original shape signatures, and often achieve better retrieval accuracy than the


original shape signatures. Vranic et al. perform a spherical harmonics transform of the point distribution of the given model [12], while Chen et al. exploit the idea that two models are similar if they look similar from different view angles, and hence extract transform coefficients in 2D image space instead of 3D space [13]. Transform-based 3D retrieval often achieves better retrieval performance than histogram-based methods, but is more computationally costly.

Rotation-invariant shape signatures. This kind of shape signature is robust against rotation. Shape distributions use measures over the surface, such as distance, angle and area, to generate histograms [14]. The angle and distance distribution (AD) integrates normal information into the distance distribution [15]. The generalized shape distributions combine local and global shape features for 3D retrieval. The shape index defined by curvature is adopted as the MPEG-7 3D shape descriptor [16]. The Radius-Angle Histogram extracts the angle between radius and normal for the histogram [17]. The local diameter shape function computes the distance from the surface to the medial axis [18]; it has a characteristic similar to the Poisson measure proposed in this paper, but its extraction is very time-consuming (it requires nearly 2 minutes on average to construct the histogram).

3 Poisson Equation

Poisson's equation arises in gravitation and electrostatics and is fundamental to mathematical physics. Mathematically, Poisson's equation is a second-order elliptic partial differential equation, defined here as:

$$\Delta U = -1 \tag{1}$$

where $\Delta$ is the Laplacian operator. The Poisson equation assigns a value to every internal point. By definition, it is somewhat similar to the distance transform, which assigns to every internal point a value that depends on the relative position of that point within the given shape and reflects its minimal distance to the boundary. The Poisson equation, however, differs substantially from the distance transform: it corresponds to placing a set of particles at a point and letting them move in a random walk until they hit the contour, and it measures the mean time required for a particle to hit the boundary. In other words, the Poisson equation considers each internal point to be affected by many boundary points, and it is therefore more robust than the distance transform. The Poisson equation has several properties that are useful for shape analysis:

1. Rotation invariance. The Poisson equation is independent of the coordinate system over the entire domain (a volume in 3D, a region in 2D), which makes signatures defined by it robust against rotation.

2. Relation to geometric structure. The Poisson equation is correlated with the geometry of the structure, which gives a mathematical meaning to the shape structure.


3. Rigid-transform invariance. Similar to geodesic distance, the Poisson equation is strongly robust to rigid transforms.
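To make the Dirichlet set-up concrete, the following sketch solves $\Delta U = -1$ on a voxel grid by simple Jacobi relaxation with $U = 0$ outside the shape. The paper itself uses a sparse direct solver (TAUCS); the iterative scheme, unit grid spacing and the assumption that the object does not touch the array border are choices made only for this example.

```python
import numpy as np

def solve_poisson(inside, iters=500):
    """Solve Laplacian(U) = -1 with U = 0 on the exterior by Jacobi iteration.

    inside : 3D boolean array marking interior voxels of the voxelised model
             (assumed to be padded so the shape never touches the array border).
    """
    u = np.zeros(inside.shape, dtype=float)
    for _ in range(iters):
        # average of the six face neighbours
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) +
               np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
        u_new = avg + 1.0 / 6.0            # Jacobi update for Laplacian(U) = -1, h = 1
        u = np.where(inside, u_new, 0.0)   # Dirichlet condition on the boundary
    return u
```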

4 Poisson Shape Histogram and Matching

Following the definition of the Poisson equation, this section discusses how to construct the Poisson shape histogram and how to compute similarity. The Poisson equation assigns a value to each internal point, but most 3D models use a boundary representation such as a mesh. The given mesh model is therefore first converted into a 3D discrete grid (48 x 48 x 48). The voxelization algorithm used in this paper is based on the Z-buffer [19]; its efficiency is independent of the object complexity, and it can be implemented efficiently. The voxelization also normalizes the scale of the given model. Suppose the voxelized model is represented by a finite voxel set $V_i$, $i = 1, 2, \dots, N$, where N is the total voxel count. The TAUCS package [20] is then used as the Poisson solver. After that, for each voxel $V_i$ we obtain a Poisson shape signature, denoted by $P_i$. The construction of the Poisson shape histogram consists of the following steps:

1) For the signature set $P_i$, $i = 1, 2, \dots, N$, compute the mean value $\mu$ and the variance $\sigma$, respectively.

2) For each $P_i$, perform Gaussian normalization by the following equation:

$$P_i' = \frac{P_i - \mu}{3\sigma} \,. \tag{2}$$

3) For the normalized set $P_i'$, construct a histogram containing 20 bins, denoted by $H = \{H_1, H_2, \dots, H_i, \dots, H_{20}\}$.

For two histograms, we use the L1 metric to measure their dissimilarity:

$$Dis_{1,2} = \sum_i |H_{1,i} - H_{2,i}| \,, \quad i = 1, 2, \dots, N \,. \tag{3}$$

where $H_1$ and $H_2$ denote the Poisson shape histograms of the two models. A larger value means the two models are more dissimilar. Section 3 discussed the properties of the Poisson equation and showed that it is independent of rigid transforms. Figure 1 shows the Poisson shape histograms of horses under different rigid transforms: the Poisson shape histogram remains almost invariant (the small differences are due to voxelization error), whereas the D2 shape distribution differs greatly between the two models.
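Steps 1)-3) and Eq. (3) can be summarised by the following sketch. The binning range of [-1, 1] after normalization and the use of relative frequencies are assumptions made for the example; the paper does not state them explicitly.

```python
import numpy as np

def poisson_shape_histogram(values, bins=20):
    """Gaussian-normalise per-voxel Poisson values and bin them (sketch)."""
    v = np.asarray(values, dtype=float)
    mu, sigma = v.mean(), v.std() + 1e-12
    normalised = (v - mu) / (3.0 * sigma)          # Eq. (2)
    hist, _ = np.histogram(normalised, bins=bins, range=(-1.0, 1.0))
    return hist / (hist.sum() + 1e-12)             # relative frequencies

def l1_dissimilarity(h1, h2):
    """Eq. (3): the L1 distance between two Poisson shape histograms."""
    return float(np.abs(np.asarray(h1) - np.asarray(h2)).sum())
```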


Fig. 1. Histogram descriptors for the above models (Upper: horses under different rigid transforms. Lower: the left is the Poisson shape histogram for the two models, and the right is the D2 shape distribution. The difference between the Poisson shape histograms is very minor, while the difference between the D2 shape distributions is very obvious).

5 Experiment

Experiments were carried out to test the retrieval performance of the Poisson shape histogram. All experiments were performed on an Intel Pentium 1.86 GHz machine with 512 MB of memory. The test models come from the Princeton Shape Benchmark (PSB) database [21]. It contains 1804 mesh models divided into two groups of 907 models each: a training set, used to tune retrieval parameters, and a testing set, used to compare the retrieval performance of different shape descriptors. The benchmark also provides several evaluation criteria for retrieval precision. Here we use the precision-recall curve, which is widely used in information retrieval, to measure retrieval accuracy. We first report the time for constructing the Poisson shape histogram and then compare its retrieval accuracy with similar histograms.

For content-based 3D retrieval, feature extraction should be fast; this is very important for practical applications. The time for building the Poisson shape histogram consists of voxelization, Poisson solving and histogram construction. The voxelization time is about 0.07 s per model, and the histogram construction takes close to 0 s. The time for the Poisson solver depends on the number of voxels; Table 1 shows it for models with different voxel counts. On average, the total time for the Poisson shape histogram is about 0.6 s, while generating the D2 shape distribution takes about 0.8 s. Next, we compare the retrieval performance of the Poisson shape histogram (PSH) with other histogram-based shape descriptors, namely the 3D shape spectrum (3DS) and the D2 distance (D2). Figure 2 gives the precision-recall curves for


Table 1. The costing time for the Poisson solver

Voxel models | Poisson solver (s)
8624         | 1.1
6832         | 0.7
4500         | 0.4
2306         | 0.2

Fig. 2. The Precision-Recall curves for different histogram-based descriptors

Fig. 3. Some retrieval results (for each row, the left model is the query and the other three models are the most similar to it; note that models under different rigid transforms are retrieved correctly).


three kinds of shape descriptors. It shows that the Poisson shape histogram achieves the best retrieval precision. Some retrieval results are shown in Figure 3; note that models under different rigid transforms are retrieved correctly.

6 Conclusion and Future Work

This paper proposed a new kind of 3D shape descriptor called the Poisson shape histogram. It uses the Poisson equation as its main mathematical tool. The encouraging characteristic of the Poisson shape histogram is that it is insensitive to rigid transforms, and it remains rotation invariant as well. The retrieval experiments have shown that the Poisson shape histogram achieves better retrieval precision than other similar histogram-based 3D shape descriptors. As a histogram, its main drawback is that it can only capture the global shape feature, so it cannot support partial matching. However, by the definition of the Poisson equation, the Poisson shape signature is only affected by local neighbours, which shows that the Poisson shape measure can represent local shape features as well. As future work, we will investigate partial matching based on the Poisson equation.

Acknowledgments. This work was supported by the Natural Science Foundation of Zhejiang Province (Grant Nos. Y106203, Y106329). It was also partially funded by the Education Office of Zhejiang Province (Grant No. 20051419) and the Education Office of Sichuan Province (Grant No. 2006B040).

References

1. Funkhouser, T., Min, P., Kazhdan, M.: A Search Engine for 3D Models. ACM Transactions on Graphics (2003) (1): 83-105
2. Akgül, C.B., Sankur, B., Yemez, Y., et al.: A Framework for Histogram-Induced 3D Descriptors. European Signal Processing Conference (2006)
3. Gorelick, L., Galun, M., Sharon, E.: Shape Representation and Classification Using the Poisson Equation. CVPR (2004) 61-67
4. Yu, Y., Zhou, K., Xu, D.: Mesh Editing with Poisson-Based Gradient Field Manipulation. ACM SIGGRAPH (2005)
5. Haider, H., Bouix, S., Levitt, J.J.: Characterizing the Shape of Anatomical Structures with Poisson's Equation. IEEE Transactions on Medical Imaging (2006) 25(10): 1249-1257
6. Tangelder, J., Veltkamp, R.: A Survey of Content Based 3D Shape Retrieval Methods. International Conference on Shape Modeling (2004)
7. Iyer, N., Kalyanaraman, Y., Lou, K.: A Reconfigurable 3D Engineering Shape Search System, Part I: Shape Representation. CDROM Proc. of ASME 2003, Chicago (2003)
8. Bustos, B., Keim, D., Saupe, D., et al.: An Experimental Effectiveness Comparison of Methods for 3D Similarity Search. International Journal on Digital Libraries (2005)
9. Horn, B.: Extended Gaussian Images. Proceedings of the IEEE (1984) 72(12): 1671-1686


10. S. Kang and K. Ikeuchi. Determining 3-D Object Pose Using The Complex Extended Gaussian Image. in International Conference on Computer Vision and Pattern Recognition. (1991). 11. M. Ankerst, G. Kastenmuller, H. P. Kriegel, et al. 3D Shape Histograms for Similarity Search and Classification in Spatial Databases. in International Symposium on Spatial Databases. (1999). 12. D. Vranic, 3D Model Retrieval, PH. D Thesis, . 2004, University of Leipzig. 13. D. Y. Chen, X. P. Tian, and Y. T. Shen, On Visual Similarity Based 3D Model Retrieval. Computer Graphics Forum (EUROGRAPHICS'03), (2003). 22(3): 223-232. 14. R. Osada, T. Funkhouser, B. Chazelle, et al. Matching 3D Models with Shape Distributions. in International Conference on Shape Modeling and Applications. (2001). 15. R. Ohbuchi, T. Minamitani, and T. Takei. Shape-Similarity Search of 3D Models by Using Enhanced Shape Functions. in Theory and Practice of Computer Graphics. (2003). 16. T. Zaharia and F. Preteux. 3D Shape-based Retrieval within the MPEG-7 Framework. in SPIE Conference on Nonlinear Image Processing and Pattern Analysis. (2001). 17. Xiang Pan, Yin Zhang, Sanyuan Zhang, et al., Radius-Normal Histogram and Hybrid Strategy for 3D Shape Retrieval. International Conference on Shape Modeling and Applications, (2005): 374-379. 18. Ran Gal, Ariel Shamir, and Daniel Cohen-Or, Pose Oblivious Shape Signature. IEEE Transactions of Visualization and Computer Graphics, (2005). 19. E. A. Karabassi, G. Papaioannou , and T. Theoharis, A Fast Depth-buffer-based Voxelization Algorithm. Journal of Graphics Tools, (1999). 4(4): 5-10. 20. S. Toledo, TAUCS: A Library of Sparse Linear Solvers. Tel-Aviv University, 2003. http://www.tau.ac.il/~stoledo/taucs. 21. P. Shilane, K. Michael, M. Patrick, et al. The Princeton Shape Benchmark. in International Conference on Shape Modeling. (2004).

Point-Sampled Surface Simulation Based on Mass-Spring System

Zhixun Su1,2, Xiaojie Zhou1, Xiuping Liu1, Fengshan Liu2, and Xiquan Shi2

1 Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, P.R. China
[email protected], [email protected], [email protected]
2 Applied Mathematics Research Center, Delaware State University, Dover, DE 19901, USA
{fliu,xshi}@desu.edu

Abstract. In this paper, a physically based simulation model for point-sampled surfaces is proposed based on a mass-spring system. First, a Delaunay based simplification algorithm is applied to the original point-sampled surface to produce the simplified point-sampled surface. Then the mass-spring system for the simplified point-sampled surface is constructed by using tangent planes to address the lack of connectivity information. Finally, the deformed point-sampled surface is obtained by transferring the deformation of the simplified point-sampled surface. Experiments on both open and closed point-sampled surfaces illustrate the validity of the proposed method.

1 Introduction

Point based techniques have gained increasing attention in computer graphics, mainly because the rapid development of 3D scanning devices has facilitated the acquisition of point-sampled geometry. Since point-sampled objects neither store nor maintain globally consistent topological information, they are more flexible than triangle meshes for handling highly complex or dynamically changing shapes. In point based graphics, point based modeling is a popular field [1,4,9,13,15,21], in which physically based modeling of point-sampled objects is still a challenging area. Physically based modeling has been investigated extensively in the past two decades. Due to their simplicity and efficiency, mass-spring systems have been widely used to model soft objects in computer graphics, for example in cloth simulation. We introduce the mass-spring system to point-sampled surface simulation. A Delaunay based simplification algorithm is applied to the original point-sampled surface to produce the simplified point-sampled surface. By using the tangent plane and projection, a mass-spring system is constructed locally for the simplified point-sampled surface. Then the deformed point-sampled surface is obtained by transferring the deformation of the simplified point-sampled surface. The remainder of the paper is organized as follows. Related work is introduced in Section 2. Section 3 explains the Delaunay based simplification algorithm.


Section 4 describes the simulation of the simplified point-sampled surface based on the mass-spring system. Section 5 introduces the transfer of displacements to the original point-sampled surface. Experimental results are shown in Section 6, and a brief discussion and conclusion are presented in Section 7.

2 Related Work

Point-sampled surfaces often consist of thousands or even millions of points sampled from an underlying surface, so reducing the complexity of such data is a key processing technique. Alexa et al. [1] measured the contribution of a point by its distance to the MLS surface defined by the other sample points, and repeatedly removed the point with the smallest contribution. Pauly et al. [14] extended mesh simplification algorithms to point clouds and presented clustering, iterative, and particle-simulation simplification algorithms. Moenning et al. [12] devised a coarse-to-fine uniform or feature-sensitive simplification algorithm with a user-controlled density guarantee. We present a projection based simplification algorithm, which is more suitable for the construction of a mass-spring system. Point based surface representation and editing are popular fields in point based graphics. Alexa et al. [1] presented the now standard MLS surface, in which the surface is defined as the stationary set of a projection operator. Later, Fleishman et al. [4] proposed a robust moving least-squares fitting with sharp features for reconstructing a piecewise smooth surface from a potentially noisy point cloud. The displacement transfer in our method is similar to moving least squares projection. Zwicker et al. [21] presented the Pointshop3D system for interactive editing of point-based surfaces. Pauly et al. [15] introduced Boolean operations and free-form deformation of point-sampled geometry. Miao et al. [10] proposed a detail-preserving local editing method for point-sampled geometry based on the combination of normal geometric details and position geometric details. Xiao et al. [19,20] presented efficient filtering and morphing methods for point-sampled geometry. Since the pioneering work of Terzopoulos and his coworkers [18], significant research effort has been devoted to physically based modeling for meshes [5,16]. Recently, Guo and Qin et al. [2,6,7,8] proposed a framework for physically based morphing, animation and simulation. Müller et al. [13] presented point based animation of elastic, plastic and melting objects based on continuum mechanics. Clarenz et al. [3] proposed a framework for processing point-based surfaces via PDEs. In this paper, the mass-spring system is constructed directly on the simplified point-sampled surface. The idea of the present method is similar to [17], which studied curve deformation, while we focus on point-sampled surface simulation.

3 Simplification of Point-Sampled Surface

The point-sampled surface consists of n points P = {pi ∈ R3 , i = 1, . . . , n} sampled from an underlying surface, either open or closed. Since the normal at any


point can be estimated by the eigenvector of the covariance matrix that corresponds to the smallest eigenvalue [14], we can assume without loss of generality that the normal n_i at point p_i is given as input. Traditional simplification algorithms keep more sample points in high-frequency regions and fewer in low-frequency regions, a property called adaptivity. However, adaptivity does not necessarily give good results for simulation. An example is shown in Fig. 1: 1a) shows a sine curve and the simplified polyline; a force F is applied to the middle of the simplified polyline, and 1b) shows that simulation on this simplified polyline produces the wrong deformation. We therefore present a Delaunay based simplification algorithm that is suitable for simulation and convenient for the construction of the mass-spring system.

a) Sine curve (solid line) and the simplified polyline(solid line)

b) Deformation of the simplified polyline under an applied force F

Fig. 1. The effect of simplification on the simulation

For p_i ∈ P, the index set of its k nearest points is denoted by N_i^k = {i_1, ..., i_k}. These points are projected onto the tangent plane at p_i (the plane passing through p_i with normal n_i), and the corresponding projection points are denoted by q_i^j, j = 1, ..., k. A 2D Delaunay triangulation is computed over the k + 1 projection points. Two cases can occur: 1) p_i is not on the boundary of the surface, or 2) p_i is on the boundary of the surface, as shown in Fig. 2. Suppose there are m points {q_i^{j_r}, r = 1, ..., m} connected with p_i in the triangle mesh; the union of the triangles that contain p_i is denoted by R_i, and its diameter by d_i. In either case, if d_i is less than a user-defined threshold, p_i is removed. This process is repeated until the desired number of points is reached or the diameter d_i of every remaining point exceeds the threshold. The resulting simplified point set is denoted by S = {s_j, j = 1, ..., n_s}, and each s_j is called a simulation point. It is important to select a proper value of k: too small a k may degrade the quality of the simplification, while too large a k increases the computational cost. In our experiments, a value of k in the interval [10, 20] is preferable.
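The projection-and-triangulation test above can be sketched as follows for a single point p_i; this assumes NumPy/SciPy and illustrative data layouts, and is not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree, Delaunay

def region_diameter(points, normals, i, k=15):
    """points: (n,3) array, normals: (n,3) array. Projects the k nearest
    neighbors of p_i onto its tangent plane, triangulates them together with
    p_i, and returns the diameter d_i of the region R_i (the union of the
    triangles incident to p_i)."""
    p = points[i]
    n = normals[i] / np.linalg.norm(normals[i])
    _, idx = cKDTree(points).query(p, k + 1)        # k+1 because p_i is included
    nbrs = points[idx[idx != i][:k]]
    # Orthonormal basis (u, v) of the tangent plane at p_i.
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    proj = np.array([[(q - p) @ u, (q - p) @ v] for q in nbrs] + [[0.0, 0.0]])
    tri = Delaunay(proj)
    pi_idx = len(proj) - 1                          # p_i projects to the origin
    ring = np.unique(tri.simplices[(tri.simplices == pi_idx).any(axis=1)])
    verts = proj[ring]
    return max(np.linalg.norm(a - b) for a in verts for b in verts)

# p_i is removed while its region diameter stays below the chosen threshold.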

4 Simulation Based on Mass-Spring System

4.1 Structure of the Springs

a) Delaunay triangulation for case 1)

b) Delaunay triangulation for case 2)

Fig. 2. Delaunay triangulation of the projection points on the tangent plane

Since no explicit connectivity information is known for the simplified point-sampled surface, the traditional mass-spring system [16] cannot be applied directly. Here the stretching and bending springs are constructed from the region R_i corresponding to s_i. For s_i ∈ S, the vertices of the region R_i are {q_i^{j_r}, r = 1, ..., m}, which are the projection points of {s_i^{j_r}, r = 1, ..., m}. Assume that the q_i^{j_r} are sorted counterclockwise. The stretching springs link s_i and s_i^{j_r}, and the bending springs connect s_i^{j_r} and s_i^{j_{r+2}} (Fig. 3). This process is carried out for each point of S, and the structure of the springs is obtained.

a) Stretching springs for case 1) and 2)    b) Bending springs for case 1) and 2)

Fig. 3. The spring structures (dashed lines)

4.2 Estimation of the Mass

The mass of s_i is needed for the simulation. Note that a simulation point in a region of low sampling density represents a large mass, whereas one in a region of higher sampling density represents a smaller mass. Since the area of the region R_i reflects the sampling density, the mass of s_i can be estimated by

m_i = (1/3) ρ S_{R_i},    (1)

where S_{R_i} is the area of region R_i and ρ is the mass density of the surface.

4.3 Forces

According to Hooke's law, the internal force F^s(S_{i,j}) of the spring S_{i,j} linking two mass points s_i and s_j can be written as

F^s(S_{i,j}) = -k^s_{i,j} (||I_{i,j}|| - l^0_{i,j}) I_{i,j} / ||I_{i,j}||,    (2)

where k^s_{i,j} is the stiffness of spring S_{i,j}, I_{i,j} = s_j - s_i, and l^0_{i,j} is the natural length of spring S_{i,j}.


In dynamic simulation, a damping force is often introduced to increase stability. In our context, the damping force is represented as

F^d(S_{i,j}) = k^d_{i,j} (v_j - v_i),    (3)

where k^d_{i,j} is the damping coefficient, and v_j and v_i are the velocities of s_j and s_i. Applying external forces to the mass-spring system yields realistic dynamics. The gravitational force acting on a mass point s_i is given by

F^g = m_i g,    (4)

where m_i is the mass of the mass point s_i and g is the acceleration of gravity. A force that connects a mass point to a point r_0 in world coordinates is given by

F^r = k^r (r_0 - s_i),    (5)

where k^r is the spring constant. Similar to [18], other types of external forces, such as the effect of a viscous fluid, can be introduced into our system.

4.4 Simulation

The mass-spring system is governed by Newton's law. For a mass point s_i,

F_i = m_i a_i = m_i d²x_i/dt²,    (6)

where m_i, x_i, and a_i are the mass, displacement, and acceleration of s_i, respectively. A large number of integration schemes can be applied to Eq. (6). Explicit schemes are easy to implement and computationally cheap, but stable only for small time steps. In contrast, implicit schemes are unconditionally stable at the cost of higher computational and memory consumption. We use the explicit Euler scheme for simplicity in our system.
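As a concrete illustration of Eqs. (2)-(6), the following sketch accumulates the spring, damping and gravity forces and advances the system by one explicit Euler step; the array layouts and names are our own assumptions, not the authors' code.

import numpy as np

def euler_step(x, v, m, springs, rest, ks, kd, dt, g=(0.0, 0.0, -9.8)):
    """x, v: (n,3) positions and velocities; m: (n,) masses;
    springs: (s,2) index pairs; rest, ks, kd: (s,) natural lengths,
    stiffnesses and damping coefficients."""
    f = m[:, None] * np.asarray(g)                  # gravity, Eq. (4)
    i, j = springs[:, 0], springs[:, 1]
    d = x[j] - x[i]                                 # I_ij = s_j - s_i
    L = np.linalg.norm(d, axis=1)
    dir_ = d / L[:, None]
    fs = (ks * (L - rest))[:, None] * dir_          # Hooke force on s_i, cf. Eq. (2)
    fd = kd[:, None] * (v[j] - v[i])                # damping, Eq. (3)
    np.add.at(f, i, fs + fd)                        # equal and opposite forces
    np.add.at(f, j, -(fs + fd))
    a = f / m[:, None]                              # Newton's law, Eq. (6)
    return x + dt * v, v + dt * a                   # explicit Euler update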

5 Deformation of the Original Point-Sampled Surface

The deformation of the original point-sampled surface can be obtained from the deformation of the simplified point-sampled surface. Let us consider the x-component u of the displacement field u = (u, v, w). Similar to [13], we compute the displacement of p_i from the simulation points in its neighborhood. Since the simulation points are sampled from an underlying surface, a direct moving least squares fit may become singular due to coplanarity. We therefore treat the tangent plane at p_i as the reference domain. The simulation points s_i^j, j = 1, ..., k, in the neighborhood of p_i are projected onto the reference plane, with corresponding projection points q_i^j, j = 1, ..., k, whose coordinates in the local coordinate system with origin p_i are (x̄_j, ȳ_j), j = 1, ..., k. Let the x-component u be given by

u(x̄, ȳ) = a_0 + a_1 x̄ + a_2 ȳ.    (7)


The parameters a_l, l = 0, 1, 2, can be obtained by minimizing

E_i = Σ_{j=1}^{k} w(r_j) (u_j - a_0 - a_1 x̄_j - a_2 ȳ_j)²,    (8)

where r_j is the distance between p_i and q_i^j, and w(·) is a Gaussian weighting function, w(r_j) = exp(-r_j²/h²). Then u_i = u(0, 0) = a_0. The components v and w are computed similarly. Since the shape of the point-sampled surface changes with the displacements of the sample points, the normals of the underlying surface change as well; they can be recomputed by the covariance analysis mentioned above. The point sampling density also changes under deformation, so we use the resampling scheme of [1] to maintain the surface quality.
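The weighted least-squares fit of Eqs. (7)-(8) for one original point can be written compactly as below, assuming the neighboring simulation points have already been projected to the tangent plane; the names are illustrative.

import numpy as np

def transfer_component(xy, u_vals, r, h):
    """xy: (k,2) tangent-plane coordinates (x_bar_j, y_bar_j); u_vals: (k,)
    displacement components u_j of the nearby simulation points; r: (k,)
    distances to p_i. Returns u_i = u(0,0) = a_0."""
    w = np.exp(-(r ** 2) / h ** 2)                  # Gaussian weights
    A = np.column_stack([np.ones(len(xy)), xy])     # rows [1, x_bar_j, y_bar_j]
    AtW = A.T * w
    a = np.linalg.solve(AtW @ A, AtW @ u_vals)      # weighted normal equations
    return a[0]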

6 Experimental Results

We implemented the proposed method on a PC with a Pentium IV 2.0 GHz CPU and 512 MB RAM. Experiments were performed on both closed and open surfaces, as shown in Fig. 4. The sphere, downloaded from the PointShop3D website, is composed of 3203 surfels. For the modeling of the hat, the original point-sampled surface is sampled from the lower part of a sphere, and a stretching force applied to the middle of the point-sampled surface produces the hat. We also produced another example, the logo "CGGM 07", likewise generated by applying forces to point-sampled surfaces (Fig. 5). The simplification and the construction of the mass-spring system can be performed as a preprocess, and the simplified surface contains far fewer simulation points than the original point-sampled surface, so the simulation is very efficient. The performance is summarized in Table 1. The main computational cost lies in transferring the displacements from the simplified surface to the original point-sampled surface and in recomputing the normals of the deformed point-sampled surface. Compared to the global parameterization in [7], the local construction of the mass-spring system makes the simulation more efficient. The continuum-based method [13] models volumetric objects, while our method can handle both volumetric objects, through their boundary surfaces, and sheet-like objects.

Table 1. The simulation time

Number of simulation points     85     273    327
Simulation time per step (s)    0.13   0.25   0.32


a) The deformation of a sphere


b) The modeling of a hat

Fig. 4. Examples of our method

Fig. 5. The “CGGM 07” logo

7 Conclusion

As an easily implemented physically based method, mass-spring systems have been investigated deeply and used widely in computer graphics. However, they cannot be applied directly to point-sampled surfaces because of the missing connectivity information and the resulting difficulty of constructing the mass-spring system. We solve this construction problem for point-sampled surfaces by means of projection and present a novel mass-spring based simulation method for point-sampled surfaces. A Delaunay based simplification algorithm facilitates the construction of the mass-spring system and keeps the simulation efficient. Future work will focus on simulation with adaptive topology, and on the automatic determination of the simplification threshold to ensure a suitable tradeoff between accuracy and efficiency.

Acknowledgement This work is supported by the Program for New Century Excellent Talents in University grant (No. NCET-05-0275), NSFC (No. 60673006) and an INBRE grant (5P20RR01647206) from NIH, USA.

References 1. Alexa M., Behr J., Cohen-Or D., Fleishman S., Levin D., Silva C. T.: Computing and rendering point set surfaces. IEEE Transactions on Visualization and Computer Graphics 9(2003) 3-15 2. Bao Y., Guo X., Qin H.: Physically-based morphing of point-sampled surfaces. Computer Animation and Virtual Worlds 16 (2005) 509 - 518


3. Clarenz U., Rumpf M., Telea A.: Finite elements on point based surfaces, Proceedings of Symposium on Point-Based Graphics (2004) 4. Fleishman S., Cohen-Or D., Silva C. T.: Robust moving least-squares fitting with sharp features. ACM Transactions on Graphics 24 (2005) 544-552 5. Gibson S.F., Mirtich B.: A survey of deformable models in computer graphics. Technical Report TR-97-19, MERL, Cambridge, MA, (1997) 6. Guo X., Hua J., Qin H.: Scalar-function-driven editing on point set surfaces. IEEE Computer Graphics and Applications 24 (2004) 43 - 52 7. Guo X., Li X., Bao Y., Gu X., Qin H.: Meshless thin-shell simulation based on global conformal parameterization. IEEE Transactions on Visualization and Computer Graphics 12 (2006) 375-385 8. Guo X., Qin H.: Real-time meshless deformation. Computer Animation and Virtual Worlds 16 (2005) 189 - 200 9. Kobbelt L., Botsch M.: A survey of point-based techniques in computer graphics. Computer & Graphics, 28 (2004) 801-814 10. Miao Y., Feng J., Xiao C., Li H., Peng Q., Detail-preserving local editing for pointsampled geometry. H.-P Seidel, T. Nishita, Q. Peng (Eds), CGI 2006, LNCS 4035 (2006) 673-681 11. Miao L., Huang J., Zheng W., Bao H. Peng Q.: Local geometry reconstruction and ray tracing for point models. Journal of Computer-Aided Design & Computer Graphics 18 (2006) 805-811 12. Moenning C., Dodgson N. A.: A new point cloud simplification algorithm. Proceedings 3rd IASTED Conference on Visualization, Imaging and Image Processing, Benalm´ adena, Spain (2003) 1027-1033 13. M¨ uller M., Keiser R., Nealen A., Pauly M., Pauly M., Gross M., Alexa M.: Point based animation of elastic, plastic and melting objects, Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2004) 141-151 14. Pauly M., Gross M., Kobbelt L.: Efficient simplification of point-sampled surfaces. Proceedings of IEEE Visualization (2002) 163-170 15. Pauly M., Keiser R., Kobbelt L., Gross M.: Shape modeling with point-sampled geometry. ACM Transactions on Graphics 22(2003) 641-650 16. Provot X.: Deformation constraints in a mass-spring model to describe rigid cloth behavior. Proc of Graphics Interface (1995) 147-154. 17. Su Z., Li L., Zhou X.: Arc-length preserving curve deformation based on subdivision. Journal of Computational and Applied Mathematics 195 (2006) 172-181 18. Terzopoulos D., Platt J., Barr A., Fleischer K.: Elastically deformable models. Proc. SIGGRAPH (1987) 205-214 19. Xiao C., Miao Y., Liu S., Peng Q., A dynamic balanced flow for filtering pointsampled geometry. The Visual Computer 22 (2006) 210-219 20. Xiao C., Zheng W., Peng Q., Forrest A. R., Robust morphing of point-sampled geometry. Computer Animation and Virtual Worlds 15 (2004) 201-210 21. Zwicker M., Pauly M., Knoll O., Gross M.: Pointshop3d: An interactive system for point-based surface editing. ACM Transactions on Graphics 21(2002) 322- 329

Sweeping Surface Generated by a Class of Generalized Quasi-cubic Interpolation Spline

Benyue Su1,2 and Jieqing Tan1

1 Institute of Applied Mathematics, Hefei University of Technology, Hefei 230009, China
2 Department of Mathematics, Anqing Teachers College, Anqing 246011, China
[email protected]

Abstract. In this paper we present a new method for modeling interpolation sweep surfaces with the C^2-continuous generalized quasi-cubic interpolation spline. Given some key positions and orientations and some points through which the spine and initial cross-section curves pass, the corresponding sweep surface can be constructed by the introduced spline function without computing control points inversely, as in the B-spline and Bézier methods, or solving an equation system, as in cubic polynomial interpolation splines. A local control technique for sweep surfaces using a scaling function is also proposed, which allows the user to change the shape of an object intuitively and effectively. On the basis of these results, some examples are given to show how the method is used to model interesting surfaces.

1 Introduction

Sweeping is a powerful technique to generate surfaces in CAD/CAM, robotics motion design and NC machining, etc. There has been abundant research on the modeling of sweeping surfaces and their applications. Hu and Ling ([2], 1996) considered the swept volume of a moving object, which can be constructed from the envelope surfaces of its boundary; in their study, these envelope surfaces are the collections of the characteristic curves of the natural quadric surfaces. Wang and Joe ([13], 1997) presented sweep surface modeling by approximating a rotation minimizing frame, whose advantages lie in robust computation and smoothness along the spine curves. Jüttler and Mäurer ([5], 1999) constructed rational representations of sweeping surfaces with the help

This work was completed with the support by the National Natural Science Foundation of China under Grant No. 10171026 and No. 60473114, and in part by the Research Funds for Young Innovation Group, Education Department of Anhui Province under Grant No. 2005TD03, and the Anhui Provincial Natural Science Foundation under Grant No. 070416273X, and the Natural Science Foundation of Anhui Provincial Education Department under Grant No. 2006KJ252B, and the Funds for Science & Technology Innovation of the Science & Technology Department of Anqing City under Grant No. 2003-48.


of the associated rational frames of PH cubic curves and presented sufficient conditions ensuring G1 continuity of the sweeping surfaces. Schmidt and Wyvill ([9], 2005) presented a technique for generating implicit sweep objects that supports direct specification and manipulation of the surface with no topological limitations on the 2D sweep template. Seong, Kim et al. ([10], 2006) presented an efficient and robust algorithm for computing the perspective silhouette of the boundary of a general swept volume. In computer graphics, many advanced techniques using sweeping surfaces ([1], [3], [4]) have been applied to deformation, NC simulation, motion tracing and animation, including human body modeling and cartoon animation. Yoon and Kim ([14], 2006) proposed an approach to freeform deformation (FFD) using sweeping surfaces, where a 3D object is approximated with sweep surfaces and shape deformations are easily controlled with a small number of sweep parameters. Li, Ge and Wang ([6], 2006) introduced a sweeping function and applied it to surface deformation and modeling, where the surface can be pulled or pushed along a trajectory curve. In constructing a sweep surface, the hard work in modeling is to start from simple objects and refine them towards the desired shapes, where the construction of the spine and cross-section curves and the design of the moving frame ([8]) are very important. The Frenet frame, the generalized translation frame and the rotation-minimizing frame, among others ([4], [5], [7], [13], [14]), can be applied to solve these problems. In general, the spine curve can be represented by Bézier and B-spline methods, but these make it difficult to compute the data points inversely in order to interpolate given points. The main contribution of this paper is the development of a new method based on a class of generalized quasi-cubic interpolation splines. This approach has the following features:
• The spine and cross-section curves are C^2 continuous and pass through points given by the user, without computing the control points inversely as in the B-spline and Bézier methods or solving an equation system as in cubic polynomial interpolation splines.
• A local control technique is provided by the defined spline. It is implemented flexibly and effectively through computer-human interaction.
• The moving frame is smooth and can be established in association with the spine curve uniformly using our method.
The rest of this paper is organized as follows: the C^2-continuous generalized quasi-cubic interpolation spline is introduced in Sect. 2. We present a new method for sweep surface modeling by the generalized quasi-cubic interpolation spline in Sect. 3. Some examples of shape modeling by the introduced method are given in Sect. 4. Finally, we conclude the paper in Sect. 5.

2 C^2-Continuous Generalized Quasi-cubic Interpolation Spline

Definition 1. [11] Let b_0, b_1, b_2, ..., b_{n+2}, (n ≥ 1), be given control points. Then a generalized quasi-cubic piecewise interpolation spline curve is defined to be

p_i(t) = Σ_{j=0}^{3} B_{j,3}(t) b_{i+j},  t ∈ [0, 1],  i = 0, 1, ..., n − 1,    (1)

where

B_{0,3}(t) = 1/4 + t/2 − sin(πt/2) − (1/4) cos(πt) + (1/(2π)) sin(πt),
B_{1,3}(t) = −1/4 + t/2 + cos(πt/2) + (1/4) cos(πt) − (1/(2π)) sin(πt),
B_{2,3}(t) = 1/4 − t/2 + sin(πt/2) − (1/4) cos(πt) − (1/(2π)) sin(πt),
B_{3,3}(t) = 3/4 − t/2 − cos(πt/2) + (1/4) cos(πt) + (1/(2π)) sin(πt).    (2)

From (2) we know that the B_{i,3}(t), (i = 0, 1, 2, 3), possess properties similar to those of the B-spline basis functions, except positivity. Moreover, p_i(t) interpolates the points b_{i+1} and b_{i+2}. That is,

p_i(0) = b_{i+1},  p_i(1) = b_{i+2}.    (3)

From (1) and (2) we can also get

p_i'(0) = (π/2 − 1)(b_{i+2} − b_i),    p_i'(1) = (π/2 − 1)(b_{i+3} − b_{i+1}),
p_i''(0) = (π²/4)(b_i − 2b_{i+1} + b_{i+2}),    p_i''(1) = (π²/4)(b_{i+1} − 2b_{i+2} + b_{i+3}).    (4)

So

p_i(1) = p_{i+1}(0),    p_i^{(l)}(1) = p_{i+1}^{(l)}(0),  l = 1, 2,  i = 0, 1, ..., n − 2.    (5)

Therefore, the continuity of the quasi-cubic piecewise interpolation spline curves is established up to the second derivative. Besides this property, the quasi-cubic piecewise interpolation spline curves also possess symmetry, geometric invariance and other properties; the details can be found in our other paper ([11]).
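For concreteness, the basis functions of Eq. (2), as reconstructed above, and a point of the curve in Eq. (1) can be evaluated as in the following sketch; this is illustrative Python rather than the authors' implementation, and the function names are ours.

import numpy as np

def quasi_cubic_basis(t):
    """B_{0,3}(t), ..., B_{3,3}(t) of Eq. (2), t in [0, 1]."""
    s2, c2 = np.sin(np.pi * t / 2), np.cos(np.pi * t / 2)
    s1, c1 = np.sin(np.pi * t), np.cos(np.pi * t)
    return np.array([
        0.25 + t / 2 - s2 - c1 / 4 + s1 / (2 * np.pi),
        -0.25 + t / 2 + c2 + c1 / 4 - s1 / (2 * np.pi),
        0.25 - t / 2 + s2 - c1 / 4 - s1 / (2 * np.pi),
        0.75 - t / 2 - c2 + c1 / 4 + s1 / (2 * np.pi),
    ])

def spine_point(b, i, t):
    """p_i(t) of Eq. (1) for control points b (an (n+3, 3) array)."""
    return quasi_cubic_basis(t) @ b[i:i + 4]

# Interpolation check, cf. Eq. (3): p_i(0) = b_{i+1} and p_i(1) = b_{i+2}.
b = np.random.rand(7, 3)
assert np.allclose(spine_point(b, 0, 0.0), b[1])
assert np.allclose(spine_point(b, 0, 1.0), b[2])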

3 Sweep Surface Modeling

Given a spine curve P(t) in space and a cross-section curve C(θ), a sweep surface W(t, θ) can be generated by

W(t, θ) = P(t) + R(t)(s(t) · C(θ)),    (6)

where P(t) is the spine curve, R(t) is an orthogonal matrix representing a moving frame along P(t), and s(t) is a scaling function. Geometrically, a sweep surface W(t, θ)


is generated by sweeping C(θ) along P(t) with the moving frame R(t). The cross-section curve C(θ) lies in 2D or 3D space and passes through the spine curve P(t) during sweeping. So the key problems in sweep surface generation are to construct the spine and cross-section curves P(t) and C(θ) and to determine the moving frame R(t). Given initial cross-sections C_j(θ) moving along a spine curve P_i(t), each given position is associated with a local transformation R_i(t) acting on C_j(θ). The sweep surface is generated by interpolating these key cross-sections C_j(θ) at the positions specified by the user:

W_{i,j}(t, θ) = P_i(t) + R_i(t)(s_i(t) · C_j(θ))
             = (x_i(t), y_i(t), z_i(t))^T + [r_{11,i}(t) r_{12,i}(t) r_{13,i}(t); r_{21,i}(t) r_{22,i}(t) r_{23,i}(t); r_{31,i}(t) r_{32,i}(t) r_{33,i}(t)] (s_{x_i}(t) C_{x_j}(θ), s_{y_i}(t) C_{y_j}(θ), 0)^T,    (7)

where s(t) is a scaling function, which can be used to change the shapes of the cross-sections to achieve local deformations.

3.1 The Construction of Spine and Cross-Section Curves

From the above discussion, once the places (points) through which the cross-sections pass are given, a spine curve can be constructed to interpolate them as follows:

P_i(t) = (x_i(t), y_i(t), z_i(t))^T = Σ_{j=0}^{3} B_{j,3}(t) b_{i+j},  t ∈ [0, 1],  i = 0, 1, ..., n − 1,    (8)

where b_i, i = 0, 1, ..., n + 2, (n ≥ 1), are the points (positions) given by the user, and B_{j,3}(t), (j = 0, 1, 2, 3), are the generalized quasi-cubic piecewise interpolation spline basis functions. Similarly, if the explicit expression of a cross-section curve is unknown in advance but we know some points through which it passes, we can define the cross-section curves by

C_j(θ) = (C_{x_j}(θ), C_{y_j}(θ), 0)^T = Σ_{k=0}^{3} B_{k,3}(θ) q_{j+k},  θ ∈ [0, 1],  j = 0, 1, ..., m − 1,    (9)

where qj , j = 0, 1, · · · , m + 2, , (m ≥ 1), are given points (positions) by user. In order to improve the flexibility and local deformation of the interpolation sweeping surfaces, we introduce scaling functions defined by si (t) = (sxi (t), syi (t), 0)T =

3  j=0

Bj,3 (t)si+j , t ∈ [0, 1] ,

(10)

i = 0, 1, · · · , n − 1 , si , s˜i , 0)T , i = 0, 1, · · · , n+2, (n ≥ 1) . sˆi and s˜i are n+3 nonnegative where si = (ˆ real numbers respectively, which are called scaling factors. Bj,3 (t), (j = 0, 1, 2, 3), are generalized quasi-cubic piecewise interpolation spline base functions.

Sweeping Surface

3.2

45

The Moving Frame

In order to interpolate their special orientations of key cross-sections, we can find a proper orthogonal matrix sequence R(t) as a series of moving frame, such that R(t) interpolate the given orthogonal matrices at the time t = ti . Therefore, the interpolation problem lie in R(ti ) = Ri , where Ri are the given orthogonal matrices at t = ti . For the given positions of the moving frames (Pi , Rxi , Ryi , Rzi ), i = 0, 1, · · · , n − 1, we interpolate the translation parts Pi by generalized quasi-cubic interpolation spline introduced in the above section, and we can also interpolate three orthogonal coordinates (Rxi , Ryi , Rzi ) homogeneously by the generalized quasi-cubic interpolation spline (Fig.1(a)). Namely, Ri (t) = (Rxi (t), Ryi (t), Rzi (t))T =

3 

Bj,3 (t)(Rxi+j , Ryi+j , Rzi+j )T ,

j=0

(11)

t ∈ [0, 1], i = 0, 1, · · · , n − 1 ,

3 2 1 0 1 0.5

0

0

1

2

3

4

5

6

7

8

9

(a)

(b)

Fig. 1. (a) The moving frame on the different position, dashed line is spine curve. (b) The sweep surface associated with open cross-section curve.

Notes and Comments. Since (Rxi (t), Ryi (t), Rzi (t)) defined by Eq.(11) usually does not form an accurate orthogonal coordinate system at t = ti , we shall renew it by the Schmidt orthogonalization or an approximation of the orthogonal one with a controllable error. We can also convert the corresponding orthogonal matrices into the quaternion forms, then interpolate these quaternions by the (11) similarly, at last, the accurate orthogonal coordinate system can be obtained by the conversion inversely. From the (7), (8) and (11), we know that for the fixed θ = θ∗ , Wi,j (t, θ∗ ) =

3 

Bk,3 (t)(bi+k + Ri+k (si+k · qj∗ )) ,

k=0

where qj∗ = qj (θ∗ ) , and for the fixed t = t∗ ,

(12)

46

B. Su and J. Tan

Wi,j (t∗ , θ) = Pi∗ + Ri∗

3 

Bk,3 (t)(s∗i · qj+k ) ,

(13)

k=0

where Pi∗ = Pi (t∗ ), Ri∗ = Ri (t∗ ) and s∗i = si (t∗ ) . Since qj∗ are constant vectors, we get that Wi,j (t, θ∗ ) are C 2 -continuous and the points on curves Wi,j (t, θ∗ ) can be obtained by doing the transformation of stretching, rotation and translation on the point qj∗ . The cross-section curves Wi,j (t∗ , θ) at the t = t∗ can also be attained by the stretching, rotation and translation transformation on the initial cross-section curves Cj (θ). Moveover, by computing the first and second partial derivatives of Wi,j (t, θ), we get ∂l Wi,j (t, θ) ∂tl

= Pi (t) +

∂l ∂θ l Wi,j (t, θ)

= Ri (t)(si (t) · Cj (θ)) ,

(l)

dl (Ri (t)(si (t) dtl

· Cj (θ))) ,

l = 1, 2 .

(l)

(14)

Then Wi,j (t, θ) are C 2 -continuous with respect to t and θ by the (5) and (14).

4

The Modeling Examples

Example 1. Given interpolating points of spine curve by b0 = (0, 0, 1),b1 = (0, 0, 1),b2 = (1, 0, 2.5),b3 = (2, 0, 3),b4 = (3, 0, 3),b5 = (4, 0, 2) and b6 = (4, 0, 2). , Suppose the initial cross-section curves pass through the points (cos (i−1)π 6 (i−1)π sin 6 ), i = 1, 2, · · · , 13. The rotation angle at the four positions is 0, π/3, π/2 and 2π/3 respectively. Scaling factors are selected by sˆi = s˜i ≡ 1. Then we get sweeping interpolation surface as in the Fig.2(a) and Fig.3.

4

3.5

3

3

2

2.5

2

Trajectory curves

1

1.5

0

Object curves

1 1 0 −1

−1

−0.5

0

0.5

1

1.5

2

2.5

3

3.5

4

1 0 −1 −1

(a)

0

1

2

3

4

5

6

7

8

(b)

Fig. 2. The four key positions of cross-section curve during sweeping. (a) is the figure in example 1 and (b) is the figure in example 2.

Example 2. Given interpolation points of spine curve by b0 = (0, 0, 0),b1 = (0, 0, 1),b2 = (2, 0, 2.5),b3 = (4, 0, 3),b4 = (6, 0, 3),b5 = (8, 0, 2) and b6 = (8, 0, 2). , sin (i−1)π ), i = The initial cross-section curve interpolates the points (cos (i−1)π 6 6 1, 2, · · · , 13. The rotation angle at the four positions is 0, π/6, π/4 and π/2 respectively. The scaling factors are chosen to be sˆi = s˜i = {1.4, 1.2, 1, 0.8, 0.6, 0.4, 0.2}. Then we get sweeping interpolation surface as in the Fig.2(b) and Fig.4.

Sweeping Surface

(a)

47

(b)

Fig. 3. The sweep surface modeling in example 1, (b) is the section plane of figure (a)

(a)

(b)

Fig. 4. The sweep surface modeling in example 2, (b) is the section plane of figure (a)

Example 3. The interpolation points of spine curve and rotation angles are adopted as in the example 2. The initial cross-section curve interpolates the points q0 = (−3, 1), q1 = (−2, 2), q2 = (−1, 1), q3 = (1, 2), q4 = (2, 1), q5 = (3, 2). The scaling factors are chosen to be sˆi = s˜i ≡ 1. Then we get the sweeping interpolation surface by open cross-section curve as in the Fig.1(b).

5

Conclusions and Discussions

As mentioned above, we have described a new method for constructing interpolation sweep surfaces by the C 2 continuous generalized quasi-cubic interpolation spline. Once given some key position and orientation and some points which are passed through by the spine and initial cross-section curves, we can construct corresponding sweep surface by the introduced spline function. We have also proposed a local control technique for sweep surfaces using scaling function, which allows the user to change the shape of an object intuitively and effectively. Note that, in many other applications of sweep surface, the cross-section curves are sometimes defined on circular arcs or spherical surface, etc. Then we can construct the cross-section curves by the circular trigonometric Hermite interpolation spline introduced in our another paper ([12]). On the other hand, in order to avoid a sharp acceleration of moving frame, we can use the chord length parametrization in the generalized quasi-cubic interpolation spline.

48

B. Su and J. Tan

In future work, we will investigate the real-time applications of the surface modeling based on the sweep method and interactive feasibility of controlling the shape of freeform 3D objects .

References 1. Du, S. J., Surmann, T., Webber, O., Weinert, K. : Formulating swept profiles for five-axis tool motions. International Journal of Machine Tools & Manufacture 45 (2005) 849–861 2. Hu, Z. J., Ling, Z. K. : Swept volumes generated by the natural quadric surfaces. Comput. & Graphics 20 (1996) 263–274 3. Hua, J., Qin, H. : Free form deformations via sketching and manipulating the scalar fields. In: Proc. of the ACM Symposium on Solid Modeling and Application, 2003, pp 328–333 4. Hyun, D. E., Yoon, S. H., Kim, M. S., J¨ uttler, B. : Modeling and deformation of arms and legs based on ellipsoidal sweeping. In: Proc. of the 11th Pacific Conference on Computer Graphics and Applications (PG 2003), 2003, pp 204–212 5. J¨ uttler, B., M¨ aurer C. : Cubic pythagorean hodograph spline curves and applications to sweep surface modeling. Computer-Aided Design 31 (1999) 73–83 6. Li, C. J., Ge, W. B., Wang, G. P. : Dynamic surface deformation and modeling using rubber sweepers. Lecture Notes in Computer Science 3942 (2006) 951–961 7. Ma, L. Z., Jiang, Z. D., Chan, Tony K.Y. : Interpolating and approximating moving frames using B-splines. In: Proc. of the 8th Pacific Conference on Computer Graphics and Applications (PG 2000), 2000, pp 154–164 8. Olver, P. J. : Moving frames. Journal of Symbolic Computation 36 (2003) 501–512 9. Schmidt, R., Wyvill, B. : Generalized sweep templates for implicit modeling. In: Proc. of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, 2005, pp 187–196 10. Seong, J. K., Kim, K. J., Kim, M. S., Elber, G. : Perspective silhouette of a general swept volume. The Visual Computer 22 (2006) 109–116 11. Su, B. Y., Tan, J. Q. : A family of quasi-cubic blended splines and applications. J. Zhejiang Univ. SCIENCE A 7 (2006) 1550–1560 12. Su, B. Y., Tan, J. Q. : Geometric modeling for interpolation surfaces based on blended coordinate system. Lecture Notes in Computer Science 4270 (2006) 222–231 13. Wang, W. P., Joe, B. : Robust computation of the rotation minimizing frame for sweep surface modeling. Computer-Aided Design 23 (1997) 379–391 14. Yoon, S. H., Kim, M. S. : Sweep-based Freeform Deformations. Computer Graphics Forum (Eurographics 2006) 25 (2006) 487–496

An Artificial Immune System Approach for B-Spline Surface Approximation Problem Erkan Ülker1 and Veysi İşler2 1

Selçuk University, Department of Computer Engineering, 42075 Konya, Turkey [email protected] 2 Middle East Technical University, Department of Computer Engineering, 06531 Ankara, Turkey [email protected]

Abstract. In surface fitting problems, the selection of knots to obtain an optimized surface for a shape design is a well-known problem. For large data, it must be handled by optimization algorithms that avoid possible local optima and at the same time reach the desired solution iteratively. Many computational intelligence techniques, such as evolutionary optimization algorithms, artificial neural networks and fuzzy logic, have already been applied successfully to this problem. This paper presents an application of another computational intelligence technique known as "Artificial Immune Systems (AIS)" to the B-spline surface fitting problem. Our method can determine the appropriate number and locations of knots automatically and simultaneously. Numerical examples are given to show the effectiveness of our method. Additionally, a comparison between the proposed method and a genetic algorithm is presented.

1 Introduction

Since B-spline curve fitting for noisy or scattered data can be considered a nonlinear optimization problem with a high level of computational complexity [3, 4, 6], non-deterministic optimization strategies should be employed. Here, methods taken from computational intelligence offer promising results. By computational intelligence techniques, as used in this paper, we mean strategies inspired by numerically based artificial intelligence systems such as evolutionary algorithms and neural networks. One of the most conspicuous and promising approaches to this problem is based on neural networks. Previous studies mostly focused on traditional surface approximation [1]; the first application of neural networks to this field appeared in [15]. Later studies involving Kohonen networks [8, 9, 12], self-organizing maps [13, 14] and functional networks [5, 7, 10] extended the work on surface design. Evolutionary algorithms are based on natural selection for multi-objective optimization. Most of the evolutionary optimization techniques, such as genetic algorithms (GA) [3, 6, 17], simulated annealing [16] and simulated evolution [17, 18, 19], have been applied to this problem successfully.


This paper presents the application of one of the computational intelligence techniques, called "Artificial Immune Systems (AIS)", to the surface fitting problem using B-splines. Individuals are formed by treating knot placement candidates as antibodies, and the continuous problem is solved as a discrete problem as in [3] and [6]. The affinity criterion is defined using the Akaike Information Criterion (AIC), and in each generation the search proceeds from good candidate models towards the best model. The proposed method can determine the placement and number of knots automatically and simultaneously. Numerical examples are given to show the effectiveness of the proposed method, and a comparison between the proposed method and the genetic algorithm is presented.

2 B-Spline Surface Approximation

In mathematical terms, geometry fitting can be formulated as the minimization of the fitting error under some accuracy constraints. A typical error measure for parametric surface fitting is

Q_2 =

N

N

y

∑∑ x

i =1

j =1

w i , j {S (x i , y

)− F }

2

j

i, j

(1)

Surface fitting from sample points is also known as surface reconstruction. This paper applies a local fitting and blending approach to this problem. The readers can refer to [1] and [2] for details. A B-Spline curve, C (u), is a vector valued function which can be expressed as: m

C (u ) = ∑ N i ,k (u ) Pi , u ∈ [u k −1 , u m +1 ]

(2)

i =0

where P_i represents the control points (vectors) and N_{i,k} are the normalized k-order B-spline basis functions, which can be defined recursively as follows: N_{i,1}(u) = 1 if u ∈ [u_i, u_{i+1}) and 0 otherwise, and

N i , k (u ) =

u − ui u −u N i , k −1 (u ) + i + k N i +1, k −1 (u ) u i + k −1 − u i u i + k − u i +1

(3)

where ui represents knots that are shaped by a knot vector and U= {u0,u1,…,um}. Any B-Spline surface is defined as follows: m

n

S (u, v ) = ∑∑ N i ,k (u )* N j ,l (v )* Pi , j

(4)

i = 0 j =0

As can be seen from upper equations, B-spline surface is given as unique with the degree, knot values and control points. Then, surface is formed by these parameters. Input is a set of unorganized points in surface reconstruction problems so the degree of the surface, knots and control points are all unknown. In equation (3), the knots not only exist in the dividend but also exist in the denominator of the fractions. Thus a spline surface given as in equation (4) is a function of nonlinear knots. Assume that


the data to be fitted are given as if they are on the mesh points of the D=[a,b]x[c,d] rectangular field on the x-y plane. Then the expression like below can be written [3]: Fi , j = f (xi , y j ) + ε i , j ,

(i = 1,2,", N

x

; j = 1,2,", N y ).

(5)

In this equation, f(x, y) is the underlying function of the data, N_x and N_y are the numbers of data points in the x and y directions, respectively, and ε_{i,j} is a measurement error. Equation (4) is fitted to the given data by the least squares method using equation (5). For the parameterization of the B-spline curve in Equation (2) and the surface in Equation (4), one of the uniform, chordal or centripetal parameterization methods must be chosen. Then the sum of squared residuals is calculated by Equation (1); the lower index of Q_2 denotes the dimension of the data. The objective to be minimized in the B-spline surface fitting problem is the function in Equation (1), and the variables of the objective function are the B-spline coefficients and the interior knots. The B-spline coefficients are linear parameters, whereas the interior knots are nonlinear parameters, since S(u, v) is a nonlinear function of the knots. This minimization problem is known as a multi-modal optimization problem [4].
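The recursion in Eq. (3) is the standard Cox-de Boor recursion and can be evaluated as in the following sketch; the code and names are illustrative only, not the authors' implementation.

def bspline_basis(i, k, u, knots):
    """N_{i,k}(u) per Eq. (3); k = 1 is the piecewise-constant case.
    Note the half-open intervals: u equal to the last knot returns 0."""
    if k == 1:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((u - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, u, knots))
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - u) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, u, knots))
    return left + right

def bspline_curve_point(ctrl, k, u, knots):
    """C(u) per Eq. (2) for control points ctrl[0..m]."""
    return sum(bspline_basis(i, k, u, knots) * p for i, p in enumerate(ctrl))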

3 B-Spline Surface Approximation by Artificial Immune System

AIS emerged in the 1990s as systems combining a variety of biologically inspired computational methods such as artificial neural networks and artificial life. AIS have been used in very diverse areas such as classification, learning, optimization, robotics and computer security [11]. The following components are needed to construct an AIS: (i) a representation of the system parts, (ii) a mechanism to compute the interaction of the system parts with each other and with the environment, and (iii) adaptation procedures. Different methods have been employed for each of these components in the algorithms developed so far. We decided that the clonal selection algorithm was best suited to the purpose of our study. A distance criterion is used to measure the degree of interaction between antigen and antibody. If the antibody and antigen are represented as component vectors Ab and Ag of length L, respectively, the Euclidean distance between Ab and Ag is calculated as

D =

L

∑ ( Ab i =1

i

− Ag

i

)2

(6)

B-spline surface fitting problem is to approximate the B-spline surface that approximate a target surface in a certain tolerance interval. Assume that object surface is defined as Nx x Ny grid type with ordered and dense points in 3D space and the knots of the B-spline surface that will be fitted are nx x ny grid that is subset of Nx x Ny grid. Degrees of curves, mx and my, will be entered from the user. Given number of points Nx x Ny is assigned to L that is dimensions of the Antigen and Antibody. Each bit of Antibody and Antigen is also called their molecule and is equivalent to a data point. If the value of a molecule is 1 in this formulation then a knot is placed to a suitable data point otherwise no knot is placed. If the given points are lied down

52

E. Ülker and V. İşler

between [a,b] and [c,d] intervals, nx x ny items of knots are defined in this interval and called as interior knots. Initial population includes k Antibody with L numbers of molecules. Molecules are set as 0 and 1, randomly. For recognizing (response against Antigen) process, affinity of Antibody against Antigen were calculated as in Equation (7) that uses the distance between AntibodyAntigen and AIC that is preferred as fitness measure in [3] and [6] references. Affinity = 1 − (AIC Fitnessavrg )

(7)

In Equation (7), Fitnessavrg represents the arithmetical average of AIC values of all Antibodies in the population and calculated as follow. If the AIC value of any of the individual is greater than Fitnessavrg then Affinity is accepted as zero (Affinity=0) in Equation (7). K

Fitnessavrg = ∑ AICi K i =1

(8)

Where, K is the size of the population and AICi is the fitness measure of the ith antibody in the population. AIC is given as below.

AIC2 = N x N y log e Q2 + 2{(n x + m x )(n y + m y ) + n x + n y }

(9)

The antibody which is ideal solution and the exact complementary of the Antigen is the one whose affinity value is nearest to 1 among the population (in fact in memory). Euclid distance between ideal Antibody and Antigen is equal to zero. In this case the problem becomes not surface approximation but surface interpolation. In order to integrate clonal selection algorithm to this problem some modification must be carried out on the original algorithm. The followings are step by step explanation of the modifications made on the algorithm and how these modifications applied to above mentioned algorithm. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13.

Enter the data points to be fitted (In a form of Nx and Ny grid). Enter the control parameters. Build initial Antibody population with random molecules. If the population is built as the first time, create memory array (save all antibodies). Otherwise, update antibody population and memory cells and develop the memory. For each of antibody calculate B-spline and fit it to the given curve. Later on calculate the sum of squares of residuals (Q2). For each of antibody in the population calculate its AIC value and calculate the average AIC value of population. For each of antibody calculate the affinity. Choose the best antibody according to the affinity and in request antigen and interactions of every antibody. The number of the clones will be K-varieties. Produce matured antibody population by making the molecules change rational with affinity values of clones. Implement mutation according to the mutation rate. Produce new antibody according to the variety ratio. If iteration limit is not reached or antigen is not recognized fully go to step 5.


4 Experimental Results In order to evaluate the proposed AIS based Automatic Knot Placement algorithm five bivariate test functions were used (see Table 1). These functions were constructed to have a unit standard deviation and a non-negative range. Since the antibodies with highest affinity values are kept in the memory in AIS architecture, the antibody of the memory with the highest affinity for each generation was presented in the results. To see the performance evaluation and approximation speed, genetic algorithm suggested by Sarfaz at al. [6, 17] and proposed algorithm in this study were compared. Knot ratio and operation of making important points immobilize at the knot chromosome are discarded in their algorithm. The developed program has the flexibility of entering the B-spline surface orders from the user. To test the quality of the proposed model, Root Mean Square (RMS) error were calculated for M and N values from 5 to 10 for 400 (20x20) training data points from five surfaces defined as above. Initial population is fed till 500 generations. Increase in the number of generation makes increase in fitting (error decreases). The slot of the approximated curves indicates the probability of having still approximation in next generation. Table 2 shows the statistics of GA and AIS optimization execution. RMS errors between point clouds and modeled surface based on the best chromosome in GA and antibodies in population of memory in AIS were given in Table 3 for four test functions (Surface I – Surface IV). The analyses are implemented for all surfaces in Table 1. M x N mesh is determined as randomly for Surface II – Surface IV. At Surface II shown in Table 3, the choices for M and N are fitting to MxN=8x8. Similarly, the choices of M and N for Surface III and IV are fitting to MxN=9x9 and MxN=10x10, respectively. Table 1. Five test functions for the bivariate setting

f ( x1 , x 2 ) = 10 .391{( x1 − 0.4 )( x 2 − 0.6 ) + 0.36}

{ (

)}

f ( x1 , x2 ) = 24.234 r 2 0.75 − r 2 , r 2 = (x1 − 0.5) + ( x2 − 0.5)

{

(

2

)}

2

f ( x1 , x2 ) = 42.659 0.1 + x1 0.05 + x1 − 10x1 x2 + 5x4 , x1 = x1 − 0.5, x2 = x2 − 0.5

[

4

f (x1 , x2 ) = 1.3356 1.5(1 − x1 ) + e

[

{

2 x1 −1

2

{

2

4

}

}

]

f ( x1 , x2 ) = 1.9 1.35 + e Sin 13( x1 − 0.6) + e Sin(7 x2 ) x1

{

Sin 3π (x1 − 0.6) + e 3( x2 −0.5 ) Sin 4π (x2 − 0.9) 2

2

x2

2

}]

Table 2. Parameter Set Parameter Mesh Size Population size String length Mutation Rate Crosover Variety Memory Size Generation B-spline’s order

AIS 20x20 20 200 (Antibody cell length) None None 6 (30%) 40 500 Random and user defined

GA 20x20 20 200 (chromosome gen-length) 0.001 0.7 6 (30%) None 500 Random and user defined

E. Ülker and V. İşler

54

Table 3. RMS (x10-2) values of AIS and GA methods for 400 data points from Surface I to Surface IV for different MxN Gen. 1 10 25 50 100 200 300 400 500

Surface I (7x7) Surface II (8x8) G.A. A.I.S. G.A. A.I.S. 8.26 9.34 3.64 3.69 7.99 7.14 3.70 3.59 8.22 5.88 4.25 3.42 8.72 5.61 3.91 2.86 8.34 5.53 3.72 2.71 8.72 5.58 4.01 2.01 8.99 5.29 4.60 1.58 7.63 5.23 3.52 1.52 8.86 5.23 3.99 1.50

Surface III (9x9) G.A. A.I.S. 10.10 10.25 11.25 9.57 9.67 8.71 10.40 7.93 9.97 7.48 10.50 7.06 10.61 7.03 10.38 7.03 10.60 7.00

Surface IV (10x10) G.A. A.I.S. 8.21 8.57 8.76 8.02 8.36 7.61 8.53 7.44 8.78 7.22 9.30 7.10 7.99 7.07 8.45 7.00 8.57 6.95

Table 4. RMS (x10-2) values of AIS and GA methods for 400 data points from Surface V for different MxN. (a) AIS, and (b) GA.

AIS

GA

MxN 5 6 7 8 9 10 5 6 7 8 9 10

5

6 8.6015 8.4809 7.6333 7.8404 7.9077 8.0664 10.501 10.238 9.9913 10.013 10.020 9.3970

7 8.0171 8.4179 7.2749 6.8614 7.8398 6.7625 9.6512 9.8221 9.4922 8.6365 9.1249 9.1297

8 7.9324 8.3722 7.4622 6.4382 6.9039 7.1614 9.1281 9.3189 8.9494 8.6247 8.7523 8.4642

9 7.0035 7.5269 7.2034 6.4288 6.9028 6.3575 9.7179 9.5761 8.2377 8.2134 8.1843 8.2721

10 7.3263 7.3554 6.9558 6.7375 6.8971 6.9640 9.8944 7.7725 8.1184 7.6657 7.3076 7.6331

7.0629 7.1965 6.1804 6.0138 5.7911 6.3637 9.6573 8.5993 8.1649 7.8947 7.4484 7.3366

Table 5. Fitness and RMS statistics of GA and AIS for Surface V Generations

1 10 25 50 100 200 300 400 500

A.I.S. (x10-2) Best Best Max. RMS Fitn. RMS 8.84 806 27.98 7.70 695 8.811 7.37 660 7.793 6.74 589 7.357 6.03 500 6.711 5.92 485 6.085 5.91 484 5.965 5.86 477 5.918 5.79 467 8.490

Avrg. Avrg. Fitn. RMS 1226 16.3 767 8.43 682 7.57 641 7.20 511 6.12 492 5.97 488 5.94 481 5.89 488 5.95

G.A. (x10-2) Best Best Max Avrg. Avrg. RMS Fitn. RMS Fitn. RMS 8.82 804 26.70 1319 17.6 7.96 722 12.43 961 10.8 9.69 879 30.38 1085 12.9 7.93 719 22.33 940 10.6 8.01 727 10.86 891 9.87 9.26 843 12.58 925 10.2 7.69 694 29.60 1043 12.3 8.47 772 11.95 922 10.2 7.93 719 13.31 897 9.95

Table 4 points out Surface V. RMS error was calculated for some M and N values from 5 to 10 for 400 (20x20) training data points. As the reader can evaluate, the error approaches depending on the M and N values are reasonable and the best choice fits to the MxN=9x10 as shown italic in Table 4. According to the knots MxN that offer best

An Artificial Immune System Approach for B-Spline Surface Approximation Problem

55

fitting, the proposed algorithm was also compared with GA based algorithm by Safraz et al. regarding to approximation speed. The outputs of the programs were taken for some generations in the training period. Best and average fitness values of individuals and antibodies according to the related generations of the program outputs were given in Table 5. The graphics of approximations of the proposed AIS approach and GA approach for all generations are given in Fig. 1. The bold line and dotted line represent maximum fitness and average fitness values respectively. G.A.

A.I.S.

2000

(Plot residue removed; the Fig. 1 panels plot fitness (Max and Avg.) against generation for G.A. and A.I.S.)

Fig. 1. Parameter optimization based on GA and AIS over the generations

5 Conclusion and Future Work

This paper presents an application of the computational intelligence technique known as "Artificial Immune Systems (AIS)" to the surface fitting problem using B-splines. As in [3] and [6], the original problem is converted into a discrete combinatorial optimization problem and solved in that form. It has been clearly shown that the proposed AIS algorithm is very useful for finding good knots automatically: the suggested method can determine the number and placement of the knots simultaneously, and no subjective parameters such as an error tolerance, a regularity (order) factor or well-chosen initial knot locations are required. There are two basic requirements on each B-spline surface in the iterations for guaranteeing a good approximation: (1) its shape must be similar to the target surface; (2) its control points must be well distributed, i.e., its knots must be placed appropriately. The technique presented in this paper is shown to relax the second requirement. In this study, the clonal selection algorithm of AIS is applied to the surface reconstruction problem and several new ways of surface modeling are developed, demonstrating the large potential of this approach. For a given set of 3D data points, AIS assists in choosing the most appropriate B-spline surface degree and knot points. In future work we will use other AIS techniques to improve the proposed method, examine their positive or negative effects, and present comparisons. Additionally, NURBS surfaces will be used to extend the suggested algorithm; this extension is especially important with regard to the complex optimization of NURBS weights.


Acknowledgements This study has been supported by the Scientific Research Projects of Selcuk University (in Turkey).

References 1. Weiss, V., Andor, L., Renner, G., Varady, T., Advanced surface fitting techniques, Computer Aided Geometric Design Vol. 19, p. 19-42, (2002). 2. Piegl, L., Tiller, W., The NURBS Book, Springer Verlag, Berlin, Heidelberg, (1997). 3. Yoshimoto, F., Moriyama, M., Harada, T., Automatic knot placement by a genetic algorithm for data fitting with a spline, Proc. of the International Conference on Shape Modeling and Applications, IEEE Computer Society Press, pp. 162-169, (1999). 4. Goldenthal, R., Bercovier, M. Design of Curves and Surfaces by Multi-Objective Optimization, April 2005, Leibniz Report 2005-12. 5. Iglesias, A., Echevarr´ýa, G., Galvez, A., Functional networks for B-spline surface reconstruction, Future Generation Computer Systems, Vol. 20, pp. 1337-1353, (2004). 6. Sarfraz, M., Raza, S.A., Capturing Outline of Fonts using Genetic Algorithm and Splines, Fifth International Conference on Information Visualisation (IV'01) , pp. 738-743, (2001). 7. Iglesias, A., Gálvez, A., A New Artificial Intelligence Paradigm for Computer-Aided Geometric Design, Lecture Notes in Artificial Intelligence Vol. 1930, pp. 200-213, (2001). 8. Hoffmann, M., Kovács E., Developable surface modeling by neural network, Mathematical and Computer Modelling, Vol. 38, pp. 849-853, (2003) 9. Hoffmann, M., Numerical control of Kohonen neural network for scattered data approximation, Numerical Algorithms, Vol. 39, pp. 175-186, (2005). 10. Echevarría, G., Iglesias, A., Gálvez, A., Extending Neural Networks for B-spline Surface Reconstruction, Lecture Notes in Computer Science, Vol. 2330, pp. 305-314, (2002). 11. Engin, O., Döyen, A., Artificial Immune Systems and Applications in Industrial Problems, Gazi University Journal of Science 17(1): pp. 71-84, (2004). 12. Boudjemaï, F., Enberg, P.B., Postaire, J.G., Surface Modeling by using Self Organizing Maps of Kohonen, IEEE Int. Conf. on Systems, Man and Cybernetics, vol. 3, pp. 2418-2423, (2003). 13. Barhak, J., Fischer, A., Adaptive Reconstruction of Freeform Objects with 3D SOM Neural Network Grids, Journal of Computers & Graphics, vol. 26, no. 5, pp. 745-751, (2002). 14. Kumar, S.G., Kalra, P. K. and Dhande, S. G., Curve and surface reconstruction from points: an approach based on SOM, Applied Soft Computing Journal, Vol. 5(5), pp. 55-66, (2004). 15. Hoffmann, M., Várady, L., and Molnar, T., Approximation of Scattered Data by Dynamic Neural Networks, Journal of Silesian Inst. of Technology, pp, 15-25, (1996). 16. Sarfraz, M., Riyazuddin, M., Curve Fitting with NURBS using Simulated Annealing, Applied Soft Computing Technologies: The Challenge of Complexity, Series: Advances in Soft Computing, Springer Verlag, (2006). 17. Sarfraz, M., Raza, S.A., and Baig, M.H., Computing Optimized Curves with NURBS Using Evolutionary Intelligence, Lect. Notes in Computer Science, Volume 3480, pp. 806-815, (2005). 18. Sarfraz, M., Sait, Sadiq M., Balah, M., and Baig, M. H., Computing Optimized NURBS Curves using Simulated Evolution on Control Parameters, Applications of Soft Computing: Recent Trends, Series: Advances in Soft Computing, Springer Verlag, pp. 35-44, (2006). 19. Sarfraz, M., Computer-Aided Reverse Engineering using Simulated Evolution on NURBS, Int. J. of Virtual & Physical Prototyping, Taylor & Francis, Vol. 1(4), pp. 243 – 257, (2006).

Implicit Surface Reconstruction from Scattered Point Data with Noise Jun Yang1,2, Zhengning Wang1, Changqian Zhu1, and Qiang Peng1 1

School of Information Science & Technology Southwest Jiaotong University, Chengdu, Sichuan 610031 China 2 School of Mechanical & Electrical Engineering Lanzhou Jiaotong University, Lanzhou, Gansu 730070 China [email protected], {znwang, cqzhu, pqiang}@home.swjtu.edu.cn

Abstract. This paper addresses the problem of reconstructing an implicit function from point clouds with noise and outliers acquired with 3D scanners. We introduce a filtering operator based on the mean shift scheme, which shifts each point to a local maximum of a kernel density function, suppressing noise of different amplitudes and removing outliers. The "clean" data points are then divided into subdomains using an adaptive octree subdivision method, and a local radial basis function is constructed at each octree leaf cell. Finally, we blend these local shape functions together with a partition of unity to approximate the entire global domain. Numerical experiments demonstrate the robustness and high quality of the proposed method on a wide variety of 3D reconstructions from point clouds containing noise and outliers. Keywords: filtering, space subdivision, radial basis function, partition of unity.

1 Introduction

Interest in point-based surfaces has grown significantly in recent years in the computer graphics community, due to the development of 3D scanning technologies and the freedom from connectivity management, which greatly simplifies many algorithms and data structures. Implicit surfaces are an elegant representation for reconstructing 3D surfaces from point clouds without explicitly having to account for topology issues. However, when the point-set data generated by range scanners (or laser scanners) contain large noise, and especially outliers, some established methods often fail to reconstruct the surfaces of real objects. There are two major classes of surface representations in computer graphics: parametric surfaces and implicit surfaces. A parametric surface [1, 2] is usually given by a function f(s, t) that maps some 2-dimensional (possibly non-planar) parameter domain Ω into 3-space, while an implicit surface typically comes as the zero-level isosurface of a 3-dimensional scalar field f(x, y, z). Implicit surface models are popular since they can describe complex shapes, provide capabilities for surface and volume modeling, and allow complex editing operations to be performed easily on such models.


Moving least squares (MLS) [3-6] and radial basis functions (RBF) [7-15] are two popular 3D implicit surface reconstruction methods. Recently, RBF has attracted more attention in surface reconstruction; it is identified as one of the most accurate and stable methods for solving scattered data interpolation problems. Using this technique, an implicit surface is constructed by calculating the weights of a set of radial basis functions such that they interpolate the given data points. From the pioneering work [7, 8] to recent research, such as compactly supported RBF [9, 10], fast RBF [11-13] and multi-scale RBF [14, 15], the established algorithms have generated increasingly faithful models of real objects over the last twenty years; unfortunately, most of them are not feasible for approximating unorganized point clouds containing noise and outliers. In this paper, we describe an implicit surface reconstruction algorithm for noisy scattered point clouds with outliers. First, we define a smooth probability density kernel function reflecting the probability that a point p lies on the surface S sampled by a noisy point cloud. A filtering procedure based on mean shift is used to move the points along the gradient of the kernel function to the positions of maximum probability. Second, we reconstruct a surface representation of the "clean" point set implicitly, based on a combination of two well-known methods, RBF and partition of unity (PoU). The filtered domain of discrete points is divided into many subdomains by an adaptively error-controlled octree subdivision, on which local shape functions are constructed by RBFs. We then blend the local solutions together using a weighted sum over the local subdomains. As we will show, our algorithm is robust and produces high-quality results.

2 Filtering

2.1 Covariance Analysis

Before introducing our surface reconstruction algorithm, we describe how to perform the eigenvalue decomposition of the covariance matrix based on the theory of principal component analysis (PCA) [24], through which the least-squares fitting plane is defined to estimate the kernel-based density function. Given the set of input points Ω = {p_i}_{i∈[1,L]}, p_i ∈ R^3, the weighted covariance matrix C for a sample point p_i ∈ Ω is determined by

C = \sum_{j=1}^{L} (p_j - \bar{p}_i)(p_j - \bar{p}_i)^{T} \cdot \Psi\left( \lVert p_j - \bar{p}_i \rVert / h \right) ,   (1)

where \bar{p}_i is the centroid of the neighborhood of p_i, Ψ is a monotonically decreasing weight function, and h is the adaptive kernel size for the spatial sampling density. Consider the eigenvector problem

C ⋅ el = λl ⋅ el .

(2)

Since C is symmetric and positive semi-definite, all eigenvalues λ_l are real-valued and the eigenvectors e_l form an orthogonal frame, corresponding to the principal components of the local neighborhood.


Assuming λ_0 ≤ λ_1 ≤ λ_2, it follows that the least-squares fitting plane H(p): (p − \bar{p}_i) · e_0 = 0 through \bar{p}_i minimizes the sum of squared distances to the neighbors of p_i. Thus e_0 approximates the surface normal n_i at p_i, i.e., n_i = e_0. In other words, e_1 and e_2 span the tangent plane at p_i.
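To make the covariance analysis concrete, the following sketch (our own minimal NumPy version, not the authors' code) builds the weighted covariance matrix of equation (1) for one sample, assuming a Gaussian weight for Ψ, and takes the eigenvector of the smallest eigenvalue as the normal estimate.

```python
import numpy as np

def estimate_normal(neighbors, h):
    """PCA-style normal estimation for one sample point.

    neighbors: (L, 3) array of points around p_i;
    h: kernel size; a Gaussian weight is assumed for Psi."""
    p_bar = neighbors.mean(axis=0)                  # centroid of the neighborhood
    d = neighbors - p_bar                           # p_j - p_bar
    w = np.exp(-np.sum(d * d, axis=1) / h**2)       # assumed monotone weight Psi(|p_j - p_bar| / h)
    C = (d * w[:, None]).T @ d                      # weighted covariance, eq. (1)
    eigvals, eigvecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    normal = eigvecs[:, 0]                          # e0: eigenvector of the smallest eigenvalue
    return p_bar, normal                            # fitting plane through p_bar with normal e0
```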

2.2 Mean Shift Filtering

Mean shift [16, 17] is one of the robust iterative algorithms in statistics. Using this algorithm, the samples are shifted to the most likely positions, which are local maxima of the kernel density function. It has been applied in many fields of image processing and visualization, such as tracking, image smoothing and filtering. In this paper, we use a nonparametric kernel density estimation scheme to estimate an unknown density function g(p) of the input data. A smooth kernel density function g(p) is defined to reflect the probability that a point p ∈ R^3 is a point on the surface S sampled by a noisy point cloud Ω. Inspired by the previous work of Schall et al. [21], we measure the probability density function g(p) by considering the squared distance of p to the plane H(p) fitted to a spatial k-neighborhood of p_i as

g(p) = \sum_{i=1}^{L} g_i(p) = \sum_{i=1}^{L} \Phi_i\left( \lVert p - p_{pro} \rVert \right) G_i\left( \lVert p_{pro} - p_i \rVert \right) \left\{ 1 - \left[ (p - p_i) \cdot n_i / h \right]^{2} \right\} ,   (3)

where Φ_i and G_i are two monotonically decreasing weighting functions that measure the spatial distribution of the point samples in the spatial domain and the range domain, respectively, and are therefore adaptive to the local geometry of the point model. The weight function could be either a Gaussian kernel or an Epanechnikov kernel; here we choose the Gaussian function e^{-x^2 / 2\sigma^2}. The point p_{pro} is the orthogonal projection of a sample point p onto the least-squares fitting plane. Positions p close to H(p) are assigned a higher probability than more distant positions. The simplest way to find the local maxima of (3) is to use a gradient-ascent process written as follows:

\nabla g(p) = \sum_{i=1}^{L} \nabla g_i(p) \approx \frac{-2}{h^{2}} \sum_{i=1}^{L} \Phi_i\left( \lVert p - p_{pro} \rVert \right) G_i\left( \lVert p_{pro} - p_i \rVert \right) \left[ (p - p_i) \cdot n_i \right] \cdot n_i .   (4)

Thus the mean shift vectors are determined as

m(p) = p - \left\{ \sum_{i=1}^{L} \Phi_i\left( \lVert p - p_{pro} \rVert \right) G_i\left( \lVert p_{pro} - p_i \rVert \right) \left[ (p - p_i) \cdot n_i \right] \cdot n_i \right\} \Big/ \sum_{i=1}^{L} \Phi_i\left( \lVert p - p_{pro} \rVert \right) G_i\left( \lVert p_{pro} - p_i \rVert \right) .   (5)

Combining equations (4) and (5), we obtain the resulting iterative equations of mean shift filtering

p_i^{j+1} = m(p_i^{j}) , \qquad p_i^{0} = p_i ,   (6)

where j is the iteration number. In our algorithm, g(p) satisfies the condition

g(p_2) - g(p_1) > \nabla g(p_1)(p_2 - p_1) \qquad \forall p_1 \geq 0, \; \forall p_2 \geq 0 ,   (7)


thus g(p) is a convex function with finitely many stable points in the set U = { p_i \mid g(p_i) \geq g(p_i^{1}) }, which results in the convergence of the series { p_i^{j}, i = 1, \ldots, L, j = 1, 2, \ldots }. In our experiments, we stop the iterative process when \lVert p_i^{j+1} - p_i^{j} \rVert \leq 5 \times 10^{-3} h is satisfied; each sample usually converges in fewer than 8 iterations. Due to the clustering property of our method, groups of outliers usually converge to a set of isolated points sparsely distributed around the surface samples. These points can be characterized by a very low spatial sampling density compared to the surface samples; we use this criterion to detect outliers and remove them with a simple threshold.
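A minimal sketch of the filtering loop of equations (5) and (6) is given below. It is our own illustration, assuming Gaussian kernels for Φ and G and keeping each sample's fitted plane (normal n_i through p_i) fixed during the iterations; parameter names are ours.

```python
import numpy as np

def mean_shift_filter(points, normals, h, sigma=1.0, max_iter=8, tol=5e-3):
    """Sketch of the mean-shift filtering of equations (5)-(6).

    points[i], normals[i] describe the fitted plane H(p) of sample i;
    Gaussian kernels are assumed for Phi and G, sigma is our choice."""
    gauss = lambda d2: np.exp(-d2 / (2.0 * sigma**2))
    filtered = points.copy()
    for i in range(len(points)):
        p = points[i]
        for _ in range(max_iter):
            offs = np.einsum('ij,ij->i', p - points, normals)   # (p - p_i) . n_i
            p_pro = p - offs[:, None] * normals                  # projections of p onto the planes
            w = gauss(np.sum((p - p_pro)**2, axis=1)) * \
                gauss(np.sum((p_pro - points)**2, axis=1))       # Phi * G weights
            shift = (w * offs) @ normals / max(w.sum(), 1e-12)   # weighted plane offset, eq. (5)
            p_new = p - shift
            converged = np.linalg.norm(p_new - p) <= tol * h     # stopping criterion of the text
            p = p_new
            if converged:
                break
        filtered[i] = p
    return filtered
```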

3 Implicit Surface Reconstruction

3.1 Adaptive Space Subdivision

In order to avoid solving a dense linear system, we subdivide the whole set of points filtered by mean shift into slightly overlapping subdomains. An adaptive octree-based subdivision method introduced by Ohtake et al. [18] is used for our space partition. We define the local support radius R = α d_i for the cubic cells generated during the subdivision, where d_i is the length of the main diagonal of the cell. Each cell should contain between T_min and T_max points; in our implementation, α = 0.6, T_min = 20 and T_max = 40 have provided satisfying results. A local max-norm approximation error is estimated according to the Taubin distance [19],

\varepsilon = \max_{\lVert p_i - c_i \rVert < R} \left| f(p_i) \right| \big/ \lVert \nabla f(p_i) \rVert .   (8)

If ε is greater than a user-specified threshold ε_0, the cell is subdivided, and a local neighborhood function f_i is built for each leaf cell.
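The following skeleton (our own, not the authors' implementation) illustrates this error-driven octree refinement; the Taubin-style error test of equation (8) is passed in as a callback, and the T_min lower bound on the point count is omitted for brevity.

```python
import numpy as np

class OctreeCell:
    """Minimal error-driven octree refinement sketch (our own skeleton).

    local_error(pts, center, R) is assumed to return the Taubin-style
    max-norm error of eq. (8) for a locally fitted function."""
    def __init__(self, center, half_size, indices):
        self.center, self.half_size, self.indices = center, half_size, indices
        self.children = []

    def subdivide(self, points, local_error, eps0, t_max=40, alpha=0.6):
        pts = points[self.indices]
        if len(pts) == 0:
            return
        diag = 2.0 * np.sqrt(3.0) * self.half_size           # main diagonal d_i of the cell
        R = alpha * diag                                      # local support radius R = alpha * d_i
        if len(pts) <= t_max and local_error(pts, self.center, R) <= eps0:
            return                                            # leaf: local function is accurate enough
        for dx in (-0.5, 0.5):
            for dy in (-0.5, 0.5):
                for dz in (-0.5, 0.5):
                    c = self.center + self.half_size * np.array([dx, dy, dz])
                    mask = np.all(np.abs(pts - c) <= 0.5 * self.half_size, axis=1)
                    child = OctreeCell(c, 0.5 * self.half_size, self.indices[mask])
                    child.subdivide(points, local_error, eps0, t_max, alpha)
                    self.children.append(child)
```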

3.2 Estimating Local Shape Functions

Given the set of N pairwise distinct points Ω = {p_i}_{i∈[1,N]}, p_i ∈ R^3, filtered by the mean shift algorithm, and the set of corresponding values {v_i}_{i∈[1,N]}, v_i ∈ R, we want to find an interpolant f : R^3 → R such that

f(p_i) = v_i .   (9)

We choose f(p) to be a radial basis function of the form

f(p) = \eta(p) + \sum_{i=1}^{N} \omega_i \, \varphi\left( \lVert p - p_i \rVert \right) ,   (10)

where \eta(p) = \sum_{k=1}^{Q} \zeta_k \eta_k(p), with {\eta_k(p)}_{k∈[1,Q]} a basis of the space of real-valued polynomials in 3 variables of order at most m (the dimension Q depends on m and on the choice of φ), φ is a basis function, the ω_i are real-valued weights, and ‖·‖ denotes the Euclidean norm.


There are many popular choices for the basis function φ: biharmonic φ(r) = r, triharmonic φ(r) = r^3, multiquadric φ(r) = (r^2 + c^2)^{1/2}, Gaussian φ(r) = exp(−c r^2), and thin-plate spline φ(r) = r^2 log(r), where r = ‖p − p_i‖. As we have an under-determined system with N + Q unknowns and N equations, the so-called natural additional constraints on the coefficients ω_i are added in order to ensure orthogonality, so that

\sum_{i=1}^{N} \omega_i \eta_1(p_i) = \sum_{i=1}^{N} \omega_i \eta_2(p_i) = \cdots = \sum_{i=1}^{N} \omega_i \eta_Q(p_i) = 0 .   (11)

Equations (9), (10) and (11) may be written in matrix form as

\begin{pmatrix} A & \eta \\ \eta^{T} & 0 \end{pmatrix} \begin{pmatrix} \omega \\ \zeta \end{pmatrix} = \begin{pmatrix} v \\ 0 \end{pmatrix} ,   (12)

where A = φ(‖p_i − p_j‖), i, j = 1, …, N, η = η_k(p_i), i = 1, …, N, k = 1, …, Q, ω = ω_i, i = 1, …, N, and ζ = ζ_k, k = 1, …, Q. Solving the linear system (12) determines ω_i and ζ_k, and hence f(p).
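As an illustration of equations (10)-(12), the sketch below (our own, assuming the biharmonic kernel φ(r) = r and a linear polynomial part, i.e. Q = 4) assembles and solves the symmetric system for one leaf cell with a dense solver.

```python
import numpy as np

def fit_local_rbf(points, values):
    """Solve the symmetric system (12) for one leaf cell.

    Biharmonic kernel phi(r) = r and a linear polynomial part
    (1, x, y, z), i.e. Q = 4, are assumed in this sketch."""
    N = len(points)
    A = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)  # A_ij = phi(|p_i - p_j|)
    P = np.hstack([np.ones((N, 1)), points])                             # eta_k(p_i), k = 1..4
    M = np.vstack([np.hstack([A, P]),
                   np.hstack([P.T, np.zeros((4, 4))])])
    rhs = np.concatenate([values, np.zeros(4)])
    sol = np.linalg.solve(M, rhs)
    omega, zeta = sol[:N], sol[N:]

    def f(p):
        # f(p) = eta(p) + sum_i omega_i * phi(|p - p_i|), eq. (10)
        return zeta @ np.concatenate([[1.0], p]) + omega @ np.linalg.norm(p - points, axis=1)
    return f
```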

Fig. 1. A set of locally defined functions are blended by the PoU method. The resulting function (solid curve) is constructed from four local functions (thick dashed curves) with their associated weight functions (dashed dotted curves).

3.3 Partition of Unity

After suppressing high-frequency noise and removing outliers, we divide the global domain Ω = {p_i}_{i∈[1,N]} into M slightly overlapping subdomains {Ω_i}_{i∈[1,M]} with Ω ⊆ ∪_i Ω_i, using an octree-based space partition method. On this set of subdomains {Ω_i}_{i∈[1,M]}, we construct a partition of unity, i.e., a collection of non-negative functions {Λ_i}_{i∈[1,M]} with limited support and with ΣΛ_i = 1 in the entire domain Ω. For each subdomain Ω_i we construct a local reconstruction function f_i based on RBF to interpolate the sampled points. As illustrated in Fig. 1, four local functions f_1(p), f_2(p), f_3(p) and f_4(p) are blended together by the weight functions Λ_1, Λ_2, Λ_3 and Λ_4; the solid curve is the final reconstructed function. An approximation of a function f(p) defined on Ω is then given by a combination of the local functions

f(p) = \sum_{i=1}^{M} f_i(p) \Lambda_i(p) .   (13)

The blending function is obtained from any other set of smooth functions by a normalization procedure


\Lambda_i(p) = w_i(p) \Big/ \sum_{j} w_j(p) .   (14)

The weight functions w_i must be continuous at the boundaries of the subdomains Ω_i. Tobor et al. [15] suggested that the weight functions w_i be defined as the composition of a distance function D_i : R^n → [0,1], where D_i(p) = 1 at the boundary of Ω_i, and a decay function θ : [0,1] → [0,1], i.e., w_i(p) = θ ◦ D_i(p). More details about D_i and θ can be found in Tobor's paper.
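A minimal sketch of the blending of equations (13) and (14) follows. It is our own illustration, assuming spherical subdomains and a simple decay θ(d) = (1 − d)^2 of our choosing for the weight functions.

```python
import numpy as np

def pou_blend(p, centers, radii, local_fs):
    """Evaluate the partition-of-unity blend of eqs. (13)-(14) at point p.

    centers/radii describe the overlapping spherical subdomains;
    local_fs[i] is the local RBF of subdomain i.  The decay
    theta(d) = (1 - d)^2, clamped to [0, 1], is our own choice."""
    d = np.linalg.norm(p - centers, axis=1) / radii      # distance function D_i(p) on Omega_i
    w = np.clip(1.0 - d, 0.0, 1.0) ** 2                  # w_i(p) = theta(D_i(p)), zero outside the subdomain
    total = w.sum()
    if total == 0.0:                                     # p lies outside every subdomain
        return 0.0
    lam = w / total                                      # Lambda_i(p), eq. (14)
    vals = np.array([f(p) for f in local_fs])
    return float(lam @ vals)                             # f(p) = sum_i Lambda_i(p) f_i(p), eq. (13)
```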

Table 1. Computational time measurements for mean shift filtering and RBF+PoU surface reconstruction with error bounded at 10^-5. Timings are listed as minutes:seconds.

model         Pinput   Pfilter   Tfilter   Toctree   Trec
Bunny         362K     165K      9:07      0:02      0:39
Dragon head   485K     182K      13:26     0:04      0:51
Dragon        2.11M    784K      41:17     0:10      3:42

Fig. 2. Comparison of implicit surface reconstruction based on RBF methods. (a) Input noisy point set of the Stanford bunny (362K). (b) Reconstruction with Carr's method [11]. (c) Reconstruction with our method.

4 Applications and Results

All results presented in this paper were obtained on a 2.8 GHz Intel Pentium 4 PC with 512 MB of RAM running Windows XP. To visualize the resulting implicit surfaces, we used a purely point-based surface rendering algorithm such as [22], instead of traditionally rendering the implicit surfaces with a Marching Cubes algorithm [23], which inherently introduces heavy topological constraints. Table 1 presents computational time measurements for filtering and reconstructing three scanned models, bunny, dragon head and dragon, with a user-specified error threshold of 10^-5. In order to achieve good denoising we choose a large k-neighborhood for the adaptive kernel computation; however, this increases the filtering time. In this paper we set k = 200. Note that there are fewer filtered points than input noisy points due to the clustering property of our method. In Fig. 2, two visual examples of the reconstruction by Carr's method [11] and by our algorithm are shown. Carr et al. use polyharmonic RBFs to reconstruct smooth,


manifold surfaces from point cloud data, and their work is considered an excellent and successful contribution in this field. However, because of its sensitivity to noise, the reconstructed model in the middle of Fig. 2 shows spurious surface sheets. The quality of our reconstruction is highly satisfactory, as illustrated on the right of Fig. 2, since a mean shift operator is introduced to deal with noise in our algorithm. To illustrate the influence of the error threshold on reconstruction accuracy and smoothness, we set two different error thresholds for the reconstruction of the scanned dragon model, as demonstrated in Fig. 3.


Fig. 3. Error threshold controls reconstruction accuracy and smoothness of the scanned dragon model consisting of 2.11M noisy points. (a) Reconstructing with error threshold at 8.4x10-4. (c) Reconstructing with error threshold at 2.1x10-5. (b) and (d) are close-ups of the rectangle areas of (a) and (c) respectively.

5 Conclusion and Future Work

In this study, we have presented a robust method for implicit surface reconstruction from scattered point clouds with noise and outliers. The mean shift method filters the raw scanned data, and the PoU scheme then blends the local shape functions defined by RBFs to approximate the whole surface of real objects. We are also investigating various directions of future work. First, we are trying to improve the space partition method. We think that the Volume-Surface Tree [20], an alternative hierarchical space subdivision scheme providing efficient and accurate surface-based hierarchical clustering via a combination of a global 3D decomposition at coarse subdivision levels and a local 2D decomposition at fine levels near the surface, may be useful. Second, we are planning to combine our method with feature extraction procedures in order to adapt it to processing very incomplete data.

References 1. Weiss, V., Andor, L., Renner, G., Varady, T.: Advanced Surface Fitting Techniques. Computer Aided Geometric Design, 1 (2002) 19-42 2. Iglesias, A., Echevarría, G., Gálvez, A.: Functional Networks for B-spline Surface Reconstruction. Future Generation Computer Systems, 8 (2004) 1337-1353 3. Alexa, M., Behr, J., Cohen-Or, D., Fleishman, S., Levin D., Silva, C. T.: Point Set Surfaces. In: Proceedings of IEEE Visualization. San Diego, CA, USA, (2001) 21-28 4. Amenta, N., Kil, Y. J.: Defining Point-Set Surfaces. ACM Transactions on Graphics, 3 (2004) 264-270


5. Levin, D.: Mesh-Independent Surface Interpolation. In: Geometric Modeling for Scientific Visualization, Spinger-Verlag, (2003) 37-49 6. Fleishman, S., Cohen-Or, D., Silva, C. T.: Robust Moving Least-Squares Fitting with Sharp Features. ACM Transactions on Graphics, 3 (2005) 544-552 7. Savchenko, V. V., Pasko, A., Okunev, O. G., Kunii, T. L.: Function Representation of Solids Reconstructed from Scattered Surface Points and Contours. Computer Graphics Forum, 4 (1995) 181-188 8. Turk, G., O’Brien, J.: Variational Implicit Surfaces. Technical Report GIT-GVU-99-15, Georgia Institute of Technology, (1998) 9. Wendland, H.: Piecewise Polynomial, Positive Definite and Compactly Supported Radial Functions of Minimal Degree. Advances in Computational Mathematics, (1995) 389-396 10. Morse, B. S., Yoo, T. S., Rheingans, P., Chen, D. T., Subramanian, K. R.: Interpolating Implicit Surfaces from Scattered Surface Data Using Compactly Supported Radial Basis Functions. In: Proceedings of Shape Modeling International, Genoa, Italy, (2001) 89-98 11. Carr, J. C., Beatson, R. K., Cherrie, J. B., Mitchell, T. J., Fright, W. R., McCallum, B. C., Evans, T. R.: Reconstruction and Representation of 3D Objects with Radial Basis Functions. In: Proceedings of ACM Siggraph 2001, Los Angeles, CA , USA, (2001) 67-76 12. Beatson, R. K.: Fast Evaluation of Radial Basis Functions: Methods for Two-Dimensional Polyharmonic Splines. IMA Journal of Numerical Analysis, 3 (1997) 343-372 13. Wu, X., Wang, M. Y., Xia, Q.: Implicit Fitting and Smoothing Using Radial Basis Functions with Partition of Unity. In: Proceedings of 9th International Computer-AidedDesign and Computer Graphics Conference, Hong Kong, China, (2005) 351-360 14. Ohtake, Y., Belyaev, A., Seidel, H. P.: Multi-scale Approach to 3D Scattered Data Interpolation with Compactly Supported Basis Functions. In: Proceedings of Shape Modeling International, Seoul, Korea, (2003) 153-161 15. Tobor, I., Reuter, P., Schlick, C.: Multi-scale Reconstruction of Implicit Surfaces with Attributes from Large Unorganized Point Sets. In: Proceedings of Shape Modeling International, Genova, Italy, (2004) 19-30 16. Comaniciu, D., Meer, P.: Mean Shift: A Robust Approach toward Feature Space Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5 (2002) 603-619 17. Cheng, Y. Z.: Mean Shift, Mode Seeking, and Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8 (1995) 790-799 18. Ohtake, Y., Belyaev, A., Alexa, M., Turk, G., Seidel, H. P.: Multi-level Partition of Unity Implicits. ACM Transactions on Graphics, 3 (2003) 463-470 19. Taubin, G.: Estimation of Planar Curves, Surfaces and Nonplanar Space Curves Defined by Implicit Equations, with Applications to Edge and Range Image Segmentation. IEEE Transaction on Pattern Analysis and Machine Intelligence, 11 (1991) 1115-1138 20. Boubekeur, T., Heidrich, W., Granier, X., Schlick, C.: Volume-Surface Trees. Computer Graphics Forum, 3 (2006) 399-406 21. Schall, O., Belyaev, A., Seidel, H-P.: Robust Filtering of Noisy Scattered Point Data. In: IEEE Symposium on Point-Based Graphics, Stony Brook, New York, USA, (2005) 71-77 22. Rusinkiewicz, S., Levoy, M.: Qsplat: A Multiresolution Point Rendering System for Large Meshes. In: Proceedings of ACM Siggraph 2000, New Orleans, Louisiana, USA, (2000) 343-352 23. Lorensen, W. E., Cline, H. F.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics, 4 (1987) 163-169 24. 
Hoppe, H., DeRose, T., Duchamp, T., McDonald, J., Stuetzle, W.: Surface Reconstruction from Unorganized Points. In: Proceedings of ACM Siggraph’92, Chicago, Illinois, USA, (1992) 71-78

The Shannon Entropy-Based Node Placement for Enrichment and Simplification of Meshes Vladimir Savchenko1, Maria Savchenko2, Olga Egorova3, and Ichiro Hagiwara3 Hosei University, Tokyo, Japan [email protected] 2 InterLocus Inc.Tokyo, Japan [email protected] 3 Tokyo Institute of Technology, Tokyo, Japan [email protected], [email protected] 1

Abstract. In this paper, we present a novel, simple method based on the idea of exploiting the Shannon entropy as a measure of the inter-influence relationships between neighboring nodes of a mesh to optimize node locations. The method can be used in a pre-processing stage for subsequent studies such as finite element analysis, by providing better input parameters for these processes. Experimental results are included to demonstrate the functionality of our method. Keywords: Mesh enrichment, Shannon entropy, node placement.

1 Introduction

Construction of a geometric mesh from a given surface triangulation has been discussed in many papers (see [1] and references therein). Known approaches are guaranteed to pass through the original sample points, which is important in computer-aided design (CAD). However, the results of such triangulations depend drastically on the uniformity and density of the sampled points, as can be seen in Fig. 1. Surface remeshing has become very important today for CAD and computer graphics (CG) applications. Complex and detailed models can be generated by 3D scanners, and such models have found a wide range of applications in CG and CAD, particularly in reverse engineering. Surface remeshing is also very important for technologies related to engineering applications such as finite element analysis (FEA). Various automatic mesh generation tools are widely used for FEA. However, all of these tools may create distorted or ill-shaped elements, which can lead to inaccurate and unstable approximations. Thus, improvement of the mesh quality is an almost obligatory preprocessing step for mesh data in FEA. Recently, sampled point clouds have received much attention in the CG community for visualization purposes (see [2], [3]) and CAD applications (see [4], [5], [6]). The noise-resistant algorithm [6] for reconstructing a watertight surface from point cloud data presented by Kolluri et al. ignores undersampled regions; nevertheless, some examples suggest that undersampled areas need improvement by surface retouching or enrichment algorithms. In some


applications, it is useful to have various, for instance simpler, versions of the original complex models according to the requirements of the application, especially in FEA. In addition to the deterioration in the accuracy of calculations, speed may be sacrificed in some applications. The simplification of a geometric mesh considered here involves constructing mesh elements that are optimized to improve the element shape quality, using the aspect ratio (AR) as a measure of quality.

Fig. 1. Surface reconstruction of a "technical data set". (a) Cloud of points (4100 scattered points are used). (b) Triangulation produced by a Delaunay-based method (number of triangular elements: 7991, number of points: 4100).

In this paper, we present an approach to the enrichment of mesh vertices according to an AR-based entropy, which is an analog of the Shannon entropy [7]; in the following it is called A-entropy. We progressively adapt the newly created points by performing elementary interpolation operations proposed by Shepard [8] (see also [9] for more references) for generating new point instances, until an importance function If (in our case, a scalar which specifies the ideal sampling density) matches some user-specified value. A-entropy is also used to optimize node coordinates during the simplification process (edge collapsing). Recently, a wide range of papers has addressed the problem of remeshing triangulated surface meshes; see, for instance, [10], [11] and references therein, where surface remeshing based on surface parameterization and subsequent lifting of height data was applied. However, the main assumption used there is that geometric details are captured accurately in the given model. Nevertheless, as can be seen from the Darmstadt benchmark model (the technical data set shown in Fig. 1), a laser scanner often produces non-uniform samples, which leads to under-sampling, or the mesh may have holes corresponding to deficiencies in the point data. In theory, the problem of surface completion does not have a solution when the area to be triangulated is not planar, and it especially begins to go wrong when the plane of triangulation is orthogonal to features in the hole boundary (so-called crenellation features); see a good discussion of the problem in [12]. Let us emphasize that our approach is different from methods related to the reconstruction of surfaces from scattered data by interpolation methods based, for instance, on minimum-energy properties (see, for example, [13]). In our case, an


approximation of the original surface (a triangular mesh) is given. In some applications, it is important to preserve the initial mesh topology. Thus, our goal is to insert new points in domains where the If function does not satisfy the user-specified value. The main contribution of the paper is a novel vertex placement algorithm, which is discussed in detail in Section 2.

2 Vertex Placement Algorithm

The approach is extremely simple and, in theory, user dependent. In analogy to hole filling, the user defines an area where enrichment may be done, that is, the user selects a processing area with no crenellations. In practice, the whole object surface can be considered as the processing area (as has been done in the example shown in Fig. 1).

Fig. 2. Scheme of a new point generation. p1 , p2 , pi are points of the neighborhood. The dashed line is a bisector of the ”empty” sector, g is the generated point.

After that the algorithm works as follows:
1. Define a radius of sampling Rs as an analog of the point density; for instance, for the "technical data set" the radius equals 1. It can be set automatically by calculating the average point density.
2. For each point of the user-defined domain, select the K nearest points p that lie in the Rs neighborhood. If K is less than or equal to the user-predefined number of neighborhood points (in our experiments, 6) and the maximum angle of the "empty" area is larger than or equal to the user-predefined angle (in our experiments, 90°), generate a new point g with the initial guess provided by the bisector of the "empty" sector, as shown in Fig. 2.
3. Select a new neighborhood of the point g; it can be slightly different from the initial set of points. This is done in the tangent plane (projective plane) defined by the neighborhood points.
4. Perform a local Delaunay triangulation.
5. Find the points forming a set of triangles with the common node g (a star), as shown in Fig. 3(a). Calculate the new placement of the center of the star g using the technique described below (Fig. 3(b)).


6. On the lifting stage, calculate the local z-coordinate of g by Shepard interpolation. In our implementation, we use the compactly supported radial basis function [14] as the weight function.
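A small sketch of this lifting step is given below (our own illustration, not the authors' code): a Shepard-style inverse-weighted blend of the neighbors' local heights, with Wendland's compactly supported function as the weight; the choice of support radius is our assumption.

```python
import numpy as np

def wendland(r, support):
    """Wendland's compactly supported function (1 - r/R)^4 (4 r/R + 1), zero for r > R."""
    t = np.clip(r / support, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

def shepard_lift(g_xy, neighbors_xy, neighbors_z, support):
    """Step 6 sketch: Shepard interpolation of the local z-coordinate of g.

    g_xy and neighbors_xy are 2D coordinates in the tangent plane,
    neighbors_z are the local heights; the support radius is our choice."""
    r = np.linalg.norm(neighbors_xy - g_xy, axis=1)
    w = wendland(r, support)                 # compactly supported weights [14]
    if w.sum() == 0.0:
        return 0.0                           # no neighbor inside the support
    return float(w @ neighbors_z / w.sum())  # weighted average = Shepard estimate
```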

Fig. 3. (a) An initial star. (b) The final star.

The key idea of the fifth step is to progressively adapt the newly created points through a few iterations; that is, an area with low sampling density will be filled in accordance with the points generated in the previous steps. In order to obtain a "good" set of coordinates for the new (approximated) points, we need a measure of the "goodness" of the triangulations arising from the randomly arriving points. It is natural to use a mesh quality parameter, the AR of the elements of a star, as such a measure. In the case of a triangular mesh, the AR can be defined as the ratio of the maximum edge length to the minimum edge length of an element. Nevertheless, according to our experiments it is much better to use the information Mi (Mi is the AR of the i-th triangle of the star) associated with a point g (Fig. 3) of the star, in analogy with the Shannon entropy [7], which defines the uncertainty of a random variable and is a natural measure for the criterion used in the enrichment algorithm. Shannon defined the entropy of an ensemble of messages: if there are N possible messages that can be sent in one package, and message m is transmitted with probability p_m, then the entropy is

S = - \sum_{m=1}^{N} p_m \log(p_m) .   (1)

Intuitively, we can use the AR-based entropy with respect to the point g as follows:

S = - \sum_{i=1}^{N} (M_i / M_t) \log(M_i / M_t) ,   (2)

where Mt is the sum of the AR values over the star and N is the number of faces of the star. From the statistical point of view, a strict definition of the Shannon entropy for a mesh, which we denote as A-entropy and use in our algorithm, is provided as follows: consider a discrete random variable ξ with distribution

\begin{pmatrix} x_1 & x_2 & \ldots & x_n \\ p_1 & p_2 & \ldots & p_n \end{pmatrix}   (*)

where the probabilities are p_i = P{ξ = x_i}. Then divide the interval 0 ≤ x < 1 into intervals Δ_i such that the length of Δ_i equals p_i. The random variable ξ defined by ξ = x_i when γ ∈ Δ_i has the distribution (*). Suppose we have a set of empirically obtained numbers γ_1 = a_1, ..., γ_n = a_n, written in increasing order, where a_i is the AR of the i-th element of the neighborhood with the point g as its center. Let these numbers define a division of the interval a_1 ≤ x < a_n into Δ_i = a_i − a_{i−1}. In our case, the parameter a_i has minimal value 1, which is not necessarily attained in the given sampling data. Constructing the one-to-one correspondence between 1 ≤ x < a_n and 0 ≤ x < 1, the following probabilities can be written:

p_1 = \frac{a_1 - 1}{a_n - 1}, \quad p_2 = \frac{a_2 - a_1}{a_n - 1}, \quad \ldots, \quad p_n = \frac{a_n - a_{n-1}}{a_n - 1} .

Thus, we can define the random variable with the following distribution:

\begin{pmatrix} a_1 & a_2 & \ldots & a_n \\ p_1 & p_2 & \ldots & p_n \end{pmatrix}

Its probability values are used in formula (3) for the A-entropy:

A = - \sum_{i=1}^{N} p_i \log(p_i) , \qquad p_i = \frac{a_i - a_{i-1}}{a_n - 1}, \quad a_0 = 1 .   (3)

The value A of the A-entropy depends on the coordinates of the center of the star (point g in Fig. 3). Thus, maximizing the value A reduces to finding the new coordinates of this center (Fig. 3(b)) and is treated as an optimization problem. For solving this optimization problem, we use the downhill simplex method of Nelder and Mead [15].
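The following sketch (our own, assuming the star is already expressed in local coordinates and using SciPy's Nelder-Mead implementation of the downhill simplex method) computes the A-entropy of equation (3) from the sorted aspect ratios of the star and maximizes it over the position of the center g.

```python
import numpy as np
from scipy.optimize import minimize

def aspect_ratio(a, b, c):
    """AR of a triangle: longest edge length over shortest edge length."""
    e = [np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)]
    return max(e) / min(e)

def a_entropy(center, ring):
    """A-entropy of eq. (3) for a star whose boundary vertices are ring[0..n-1]."""
    n = len(ring)
    ar = sorted(aspect_ratio(center, ring[i], ring[(i + 1) % n]) for i in range(n))
    a = np.array([1.0] + ar)                      # prepend a_0 = 1, the minimal possible AR
    p = np.diff(a) / (a[-1] - 1.0 + 1e-12)        # p_i = (a_i - a_{i-1}) / (a_n - 1)
    p = p[p > 0.0]
    return -np.sum(p * np.log(p))

def optimize_center(center0, ring):
    """Move the star center to maximize the A-entropy (downhill simplex [15])."""
    res = minimize(lambda x: -a_entropy(x, ring), center0, method='Nelder-Mead')
    return res.x
```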

3 Experimental Results

In practice, as can be seen in Fig. 4, the implementation of the algorithm discussed above leads to a reasonable surface reconstruction of areas with initially low sampling density (see Fig. 1). The number of scattered points in the initial model is 4100; after enrichment the number of points increases to 12114. To decrease the number of points we then simplify this model, and the final number of points is 5261. Our triangular mesh simplification method uses predictor-corrector steps for predicting candidates for edge collapsing according to a bending energy [16], with a subsequent correction of the point placement in the simplified mesh. Let us note that the main idea of our simplification approach is to provide minimal local surface deformation during an edge collapse operation.


Fig. 4. The mechanical data set. (a) Mesh after enrichment. (b) Mesh after simplification. (c) Shaded image of the final surface.

At each iteration step:
- candidate points for an edge collapse are defined according to a local decimation cost of points belonging to a shaped polygon;
- after all candidates have been selected, we contract the edge, choosing an optimal vertex position by using the A-entropy according to the fifth step of the algorithm (see Section 2).
To detail the features of the proposed point placement scheme, Fig. 5 presents the results of applying well-known, even classical, surface simplification algorithms (tools can be found in [17]) and our method. We show a fragment of the "Horse" model (the initial AR value is 1.74; here and in the following, the average value of the AR is used) after mesh simplification produced by the different simplification techniques.

Fig. 5. Mesh fragments of the "Horse" model after simplification (~13% of original elements) by using: (a) progressive simplification; (b) a method based on a global error bound; (c) a method based on a quadric error metric; (d) our method

The volume difference between the initial model and the one simplified by our technique is 0.8%; the final AR value is 1.5. The global error bound method gives the worst results: the volume difference is 1.3% and the final AR value is 2.25. In terms of model visualization and volume preservation, the best method, without any doubt, is the one based on the quadric error metric, see [18]. However, there is a tradeoff between attaining a


high quality surface reconstruction and minimization of AR. As a result, the final AR value is equal to 2 and many elongated and skinny triangles can be observed in the mesh.

4 Concluding Remarks

In this paper, we introduce the notion of an AR-based entropy (A-entropy), which is an analog of the Shannon entropy, and we consider an enrichment technique and a mesh quality improvement technique based on this notion. The mesh quality improvement in the presented simplification technique can be compared with smoothing methods based on averaging of coordinates, such as Laplacian smoothing [19] or the angle-based method of Zhou and Shimada [20]. These methods have an intrinsic drawback, namely the possibility of creating inverted triangles, and in some non-convex domains nodes can be pulled outside the boundary. Using the entropy-based placement in the simplification algorithm decreases the possibility that a predicted point creates an inverted triangle, but does not guarantee that such an event never occurs. However, performing the operations in the tangent plane makes it fairly easy to avoid the creation of inverted triangles. Interpolation based on the Shepard method produces excessive bumps; in fact, this is a well-known feature of the original Shepard method. More sophisticated local interpolation schemes, such as [21] and others, can be implemented to control the quality of the interpolation. Matters related to feature-preserving shape interpolation have to be considered in the future. We have mentioned that it might be natural to use the AR (the mesh quality parameter) of the elements of a star as a measure for providing a reasonable vertex placement. Nevertheless, we would like to emphasize that, according to our experiments, in many cases this alone does not lead to a well-founded estimate of a point g. It might be a more rational approach to use the Shannon entropy as a measure of the inter-influence relationships between neighboring nodes of a star to calculate optimal positions of vertices. We can see in Fig. 5 that the shapes of the mesh elements after applying our method differ significantly from the results of the other simplification methods. The meshes in Fig. 5(a, b, c) are more suitable for visualization than for numerical calculations. Improving meshes of very low initial quality, for instance the "Horse" model simplified by the global error bound method, takes many iteration steps to attain an AR value close to our result, and after applying Laplacian smoothing to the model shown in Fig. 5(b) the shape of the model is strongly deformed. After applying Laplacian smoothing (300 iteration steps) to the "Horse" model simplified by the quadric error metric method, the AR and the volume difference between the original and improved models become 1.6 and 5.2%, respectively. The examples demonstrated above show that the mesh obtained by applying our method is closer to a computational mesh and can be used for FEA in any field of study dealing with isotropic meshes.


References 1. Frey P. J.: About Surface Remeshing. Proc.of the 9th Int.Mesh Roundtable (2000) 123-136 2. Alexa, M., Behr, J.,Cohen-Or, D., Fleishman, S., Levin, D., Silvia, C. T.: Point Set Surfaces. Proc. of IEEE Visualization 2001 (2002) 21-23 3. Pauly, M., Gross, M., Kobbelt, L.: Efficient Simplification of Point-Sampled Surfaces. Proc. of IEEE Visualization 2002(2002) 163-170 4. Hoppe, H., DeRose, T., Duchamp, T., McDonald, J.,Stuetzle, W.: Surface Reconstruction from Unorganized Points. Proceedings of SIGGRAPH 92 (1992) 71-78 5. Amenta, N., Choi, S., Kolluri, R.: The Powercrust. Proc. of the 6th ACM Symposium on Solid Modeling and Applications (1980) 609-633 6. Kolluri, R., Shewchuk, J.R., O’Brien, J.F.: Spectral Surface Reconstruction From Noisy Point Clouds. Symposium on Geometry Processing (2004) 11-21 7. Blahut, R.E.: Principles and Practice of Information Theory. Addison-Wisley (1987) 8. Shepard, D.: A Two-Dimensional Interpolation Function for Irregularly Spaced Data. Proc. of the 23th Nat. Conf. of the ACM (1968) 517-523 9. Franke, R., Nielson, G.,: Smooth Interpolation of Large Sets of Scattered Data. Journal of Numerical Methods in Engineering 15 (1980) 1691-1704 10. Alliez, P., de Verdiere, E.C., Devillers, O., Isenburg, M.: Isotropic Surface Remeshing. Proc.of Shape Modeling International (2003)49-58 11. Alliez, P., Cohen-Steiner, D., Devillers, O., Levy, B., Desburn, M.: Anisotropic Polygonal Remeshing. Inria Preprint 4808 (2003) 12. Liepa, P.: Filling Holes in Meshes. Proc. of 2003 Eurographics/ACM SIGGRAPH symp.on Geometry processing 43 200-205 13. Carr, J.C., Mitchell, T.J., Beatson, R.K., Cherrie, J.B., Fright, W.R., McCallumn, B.C., Evans, T.R.: Filling Holes in Meshes. Proc.of SIGGRAPH01 (2001) 67-76 14. Wendland, H.: Piecewise Polynomial, Positive Defined and Compactly Supported Radial Functions of Minimal Degree. AICM 4 (1995) 389-396 15. Nelder, J.A., Mead, R.: A simplex Method for Function Minimization. Computer J. 7 (1965) 308-313 16. Bookstein, F.L.: Morphometric Tools for Landmarks Data. Cambridge University Press (1991) Computer J. 7 (1965) 308-313 17. Schroeder, W., Martin, K., Lorensen,B.: The Visualization Toolkit. Ed.2 Prentice Hall Inc. (1998) 18. Garland, M.: A Multiresolution Modeling: Survey and Future Opportunities. Proc. of EUROGRAPHICS, State of the Art Reports (1999) 19. Bossen, F.J., Heckbert, P.S.: A Pliant Method for Anisotropic Mesh Generation. Proc. of the 5th International Meshing Roundtable (1996) 63-74 20. Zhou, T., Shimada, K.: An Angle-Based Approach to Two-dimensional Mesh Smoothing. Proc.of the 9th International Meshing Roundtable (2000) 373-384 21. Krysl, P., Belytchko, T.: An Efficient Linear-precision Partition of Unity Basis for Unstructured Meshless Methods. Communications in Numerical Methods in Engineering 16 (2000) 239-255

Parameterization of 3D Surface Patches by Straightest Distances Sungyeol Lee and Haeyoung Lee Hongik University, Dept. of Computer Engineering, 72-1 Sangsoodong Mapogu, Seoul Korea 121-791 {leesy, leeh}@cs.hongik.ac.kr

Abstract. In this paper, we introduce a new piecewise linear parameterization of 3D surface patches, which provides a basis for texture mapping, morphing, remeshing, and geometry imaging. To lower the distortion when flattening a 3D surface patch, we propose a new method to locally calculate straightest distances with cutting planes. Our new and simple technique demonstrates results competitive with the current leading parameterizations and will help many applications that require a one-to-one mapping.

1 Introduction

A 3D mesh parameterization provides a piecewise linear mapping between a 3D surface patch and an isomorphic 2D patch. It is a widely used or required operation for texture mapping, remeshing, morphing and geometry imaging. Guaranteed one-to-one mappings that require only a linear solver have been researched, and many algorithms [4,5,11,8,10] have been proposed. To reduce the inevitable distortion when flattening, a whole object is usually partitioned into several genus-0 surface patches. Non-linear techniques [19] have also been presented, with good results in some applications, but they require more computational time than linear methods. Geodesics on meshes have been used in various graphics applications such as parameterization [10], remeshing [14,20], mesh segmentation [20,6], and simulations of natural phenomena [16,9]. Geodesics provide a distance metric between vertices on meshes, which the Euclidean metric cannot. The straightest geodesic path on meshes was introduced by Polthier and Schmies [15] and used for parameterization in [10]. However, their straightest geodesics may not be defined between a source and a destination, and they require special handling of the swallow tails created by conjugate vertices [16] and of triangles with obtuse angles [9]. In this paper we present a new linear parameterization of 3D surface patches. Our parameterization improves upon [10] by presenting a new way to locally calculate straightest geodesics. Our method demonstrates visually and statistically competitive results compared to the current leading methods [5,10], as shown in Figures 1, 3 and 5 and Table 1.


Fig. 1. Comparisons with texture-mapped models, Hat and Nefertiti: (a) is produced by Floater's method [5] with a distortion of 1.26. (b) is produced by our new parameterization with a distortion of 1.20, less than Floater's. The distortion is measured by the texture stretch metric [19]. (c) is by ours with a fixed boundary and (d) is by ours with a measured boundary. We can see much less distortion in (d) than in (c).

1.1 Related Work

Parameterization. There has been an increasing need for parameterization for texture mapping, remeshing, morphing and geometry imaging. Many piecewise linear parameterization algorithms [4,5,11,8,10] have been proposed. Generally, the first step of a parameterization is mapping the boundary vertices to fixed positions. Usually the boundary is mapped to a square, a circle, or any convex shape while respecting the 3D-to-2D length ratio between adjacent boundary vertices. The positions of the interior vertices in the parameter space are then found by solving a linear system. The linear system is generated with coefficients forming a convex combination of the 1-ring neighbors of each interior vertex; these coefficients characterize geometric properties such as angle and/or area preservation. Geodesic Paths. There are several algorithms for geodesic computations on meshes, mostly based on shortest paths [13,1,7], which have been used for remeshing and parameterization [20,14]. However, special processing is still required for triangles with obtuse angles. A detailed overview of this approach can be found in [12]. Another approach is to compute the straightest geodesic path. Polthier and Schmies first introduced an algorithm for the straightest geodesic path on a mesh [15]. Their straightest geodesic path is uniquely defined with an initial condition, i.e., a source vertex and a direction, but not with boundary conditions, i.e., a source and a destination. A parameterization by straightest geodesics was first introduced in [10], which used locally calculated straightest geodesic distances for a piecewise linear parameterization. Our parameterization improves upon [10] by presenting a new way to calculate straightest geodesics.

2 Our Parameterization by Straightest Distances

A 3D mesh parameterization provides a piecewise linear mapping between a 3D surface patch and an isomorphic 2D patch. Generally the piecewise linear


parameterization is accomplished as follows: for every interior vertex Vi of a mesh, a linear relation between the (ui, vi) coordinates of this vertex and the (uj, vj) coordinates of its 1-ring neighbors {Vj}_{j∈N(i)} is set up of the form

\sum_{j \in N(i)} a_{ij} (U_j - U_i) = 0 ,   (1)

where Ui = (ui, vi) are the coordinates of vertex Vi in the parameter space, and aij are the non-negative coefficients of the matrix A. The boundary vertices are assigned to a circle or any convex shape, while respecting the 3D-to-2D length ratio between adjacent boundary vertices. The parameterization is then found by solving the resulting linear system AU = B. A is sparse because each row of the matrix A contains only a few non-zero elements (as many as the number of neighbors). A preconditioned bi-conjugate gradient (PBCG) method [17] is used to iteratively solve this sparse linear system. As long as the boundary vertices are mapped onto a convex shape, the resulting mapping is guaranteed to be one-to-one. The core of this shape-preserving parameterization is how to determine the non-negative coefficients aij. In this paper, we propose a new algorithm to determine these coefficients.
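The sketch below (our own illustration) shows how such a system can be assembled and solved with a fixed convex boundary; uniform weights are used here as a stand-in for the straightest-distance coefficients of Section 2.1, and an unpreconditioned BiCGSTAB solver replaces the PBCG solver, both assumptions of ours.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_parameterization(n_vertices, one_rings, boundary, boundary_uv, weights=None):
    """Assemble and solve the linear system of eq. (1) with a fixed boundary.

    one_rings[i] lists the 1-ring neighbors of vertex i; weights[i] holds the
    convex-combination coefficients a_ij (uniform weights are used when absent);
    boundary_uv[i] gives the fixed (u, v) of boundary vertex i."""
    boundary = set(boundary)
    rows, cols, vals = [], [], []
    b = np.zeros((n_vertices, 2))
    for i in range(n_vertices):
        rows.append(i); cols.append(i); vals.append(1.0)
        if i in boundary:
            b[i] = boundary_uv[i]                              # boundary vertex pinned to the convex shape
            continue
        nbrs = one_rings[i]
        a = weights[i] if weights is not None else np.full(len(nbrs), 1.0 / len(nbrs))
        for j, aij in zip(nbrs, a):
            rows.append(i); cols.append(j); vals.append(-aij)  # U_i - sum_j a_ij U_j = 0, eq. (1)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n_vertices, n_vertices))
    U = np.column_stack([spla.bicgstab(A, b[:, k])[0] for k in range(2)])
    return U                                                   # (u_i, v_i) for every vertex
```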

2.1 Our Local Straightest Distances

The core of this piecewise linear parameterization is finding the non-negative coefficients aij in equation (1). Our new parameterization determines these coefficients by using locally straightest paths and distances computed with local cutting planes. The work by Lee et al. [10] uses the local straightest geodesics of Polthier and Schmies [15] for these coefficients; however, the tangents of the straightest geodesics in that method are determined by the Gaussian curvatures at the vertices and may not be intuitively straightest, especially when the Gaussian curvature is not equal to 2π. In Figure 2, Vps is determined by having the same left and right angle at Vi, as in [10], while Vour is determined as intuitively straightest by our local cutting plane. Our new method for local straightest paths and distances works as follows. As shown in Figure 2, a base plane B is created locally at each interior vertex. To preserve shape better, the normal NormalB of the base plane B is calculated by area-weighted averaging of the neighboring face normals of Vi, as shown in equation (2), and normalized afterwards:

Normal_B = \sum_{j \in N(i)} w_j \, Normal_j .   (2)



Fig. 2. Our new local straightest path: For each interior vertex Vi, a local base B and a cutting plane P with Vi, Vj is created. A local straightest path is computed by cutting the face ViVkVl with P. The intersection Vj' is computed on the edge VkVl and connected to Vi to form a local straightest path. Vps is determined by Polthier and Schmies's method [15] and Vour is determined by our new method.

Fig. 3. Results by our new parameterization: the models are Nefertiti, Face, Venus, Man, Mountain, from left to right

our method. There may be multiple line intersections, since the plane P may pierce multiple neighboring faces; as future work, we will explore how to select the line segment. A local straightest path is computed by intersecting the face ViVkVl with the cutting plane P. The tangent a of this intersecting line segment VjVj' can easily be calculated from the normal Normal_j of the face ViVkVl and the normal Normal_P of the cutting plane P as follows:

a = Normal_j \times Normal_P .   (3)

Then, the intersection vertex Vj' is computed on the edge VkVl and connected to Vi to form the local straightest path VjViVj'. Finally, barycentric coordinates for


the weights of Vj, Vk, Vl are computed, summed, normalized, and then used to fill the matrix A. Figure 3 shows the results of our new parameterization.
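The following sketch (our own, equivalent to the tangent construction of equation (3) under the assumption that the cutting plane contains the base-plane normal) computes the crossing point Vj' on the opposite edge and the edge split ratio; how the resulting barycentric weights are accumulated into A follows the paper's description and is not reproduced here.

```python
import numpy as np

def straightest_crossing(vi, vj, vk, vl, base_normal):
    """One local straightest-path step of Section 2.1 (our sketch).

    The cutting plane P is assumed to contain vi, vj and the base-plane
    normal; it is intersected with edge (vk, vl) of the opposite face."""
    n_p = np.cross(vj - vi, base_normal)              # normal of the cutting plane P
    n_p /= np.linalg.norm(n_p)
    # intersect edge vk-vl with P:  n_p . (vk + t (vl - vk) - vi) = 0
    denom = n_p @ (vl - vk)
    if abs(denom) < 1e-12:
        return None                                   # edge parallel to P: no crossing on this face
    t = n_p @ (vi - vk) / denom
    if not 0.0 <= t <= 1.0:
        return None                                   # crossing lies outside the edge
    vj_prime = vk + t * (vl - vk)                     # the intersection vertex Vj'
    # barycentric coordinates of Vj' in triangle (vi, vk, vl): it lies on edge vk-vl
    return vj_prime, {'vi': 0.0, 'vk': 1.0 - t, 'vl': t}
```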

2.2 Discussion

Floater's method [5] is considered the most widely used parameterization, and LTD's method [10] also uses a straightest geodesic path algorithm based on [15]. We therefore compare our method to these two existing parameterizations. The visual results achieved by our new parameterization are shown in Figure 3. The distortion measured with the texture-stretch metric of [19] is shown in Table 1. Notice that our parameterization produces results competitive with the current leading linear parameterizations. The previous algorithms and the distortion metric (L2-norm, the mean stretch over all directions) are all implemented by us.

3 Measured Boundary for Parameterization

As shown in Figure 4 (b) and (c), and in the 1st and 2nd figures of Figure 3, high distortion always occurs near the boundary. To reduce this distortion, we derive a boundary using our straightest geodesic path algorithm. An interior source vertex S can be specified by the user or computed as the center vertex of the mesh with respect to the boundary vertices. A virtual edge is defined as an edge between S and a vertex on the boundary. The straightest paths and distances of the virtual edges to every vertex on the boundary are measured as follows:
1. Make virtual edges connecting S to every boundary vertex of the mesh.
2. Map each virtual edge onto the base plane B by a polar map, which preserves the angles between virtual edges, as in [4]. The normal of the base plane B is calculated as described in Section 2.
3. Measure the straightest distance of each virtual edge on B, from S to the corresponding boundary vertex, with the corresponding cutting plane.
4. Position each boundary vertex at the corresponding distance from S on B.
5. If the resulting boundary is non-convex, convert it to a convex shape: find the edges having a minimum angle with the consecutive edge (i.e., the concave parts of the boundary) and move the boundary vertices to form a convex shape.
In the linear system AU = B, the boundary vertices in B are simply set to the measured positions (ui, vi), and the inner vertices to (0, 0). Then PBCG, as mentioned in Section 2, is used to find the positions in the parameterized space. Figure 4 (d) and (e) clearly shows the utility of our straightest geodesic paths on the simple models Testplane (top) and Testplane2 (bottom). With a circular boundary, the previous parameterizations [5,10] produce the same results in (b) for the two different models in (a). In (c), there is also high distortion in the texture mapping obtained from (b). Our straightest path algorithm contributes two distinct measured boundaries and results in very low distortion in (d) and much better texture mapping in (e).
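A minimal sketch of the boundary placement (steps 1-4 above) is given below. It is our own illustration, which assumes the 3D angles at S and the measured straightest distances are already available, and it omits the convexification of step 5.

```python
import numpy as np

def measured_boundary(angles_3d, straightest_dists):
    """Place boundary vertices on the base plane B (steps 1-4 of Section 3).

    angles_3d[i] is the 3D angle at S between consecutive virtual edges
    (S, b_i) and (S, b_{i+1}); straightest_dists[i] is the measured
    straightest distance from S to boundary vertex b_i."""
    angles = np.asarray(angles_3d, dtype=float)
    scaled = angles * (2.0 * np.pi / angles.sum())    # polar map: preserve angle ratios, total 2*pi
    theta = np.concatenate([[0.0], np.cumsum(scaled)[:-1]])
    r = np.asarray(straightest_dists, dtype=float)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])   # fixed (u, v) for the boundary
```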


Fig. 4. Comparisons between parameterizations with a fixed boundary and a measured boundary by our new method: with a circular boundary, previous parameterizations [5,10] produce the same results in (b) for the two different models in (a). Notice in (c) that there is a lot of distortion in the texture mapping obtained from the results in (b). Our straightest path algorithm contributes a measured boundary that reduces distortion, giving distinct results in (d) and much better texture mapping in (e).

Fig. 5. Results by our new parameterization with different boundaries. The models are Face in the two left columns and Venus in the two right columns. The tip of the nose of each model is chosen as S.

Results with more complex models are demonstrated in Figure 5. Notice that there is always a high level of distortion near the fixed boundary but a low level of distortion near the measured boundary by using our method. The straightest


distances to the boundary vertices actually depend on the selection of the source vertex S. We simply use a vertex centered on the mesh with respect to the boundary as the source S. As future work, we will explore how to select the vertex S.

4 Results

The visual results of our method are shown in Figures 1, 3, and 5. The statistical results comparing our parameterization with other methods are listed in Table 1. Notice that, visually and statistically, our method produces results competitive with the previous methods.

Table 1. Comparisons of distortion measured by the texture stretch metric [19]. The boundary is fixed to a circle. Combined with measured boundaries from our straightest path algorithm, our new parameterization in the 6th column produces better results than the current leading methods.

Models      No. of Vertices   Floater's [5]    LTD's [10]       Our Param.       Our Param.
                              (fixed bound.)   (fixed bound.)   (fixed bound.)   (measured bound.)
Nefertiti   299               1.165            1.165            1.164            1.146
Man         1208              1.244            1.241            1.240            1.226
Face        1547              1.223            1.222            1.221            1.334
Venus       2024              2.159            2.162            2.168            1.263
Mountain    2500              1.519            1.552            1.550            1.119

The computational complexity of our algorithm is linear in the number of vertices, i.e., O(V). The longest processing time among the models in Table 1 is 0.53 sec, required for the Mountain model, which has the largest number of vertices. The processing time was measured on a laptop with a Pentium M 2.0GHz and 1GB RAM.

5 Conclusion and Future Work

In this paper, we introduce a new linear parameterization by locally straightest distances. We also demonstrate the utility of our straightest path algorithm to derive a measured boundary for parameterizations with better results. Future work will extend the utility of our straightest path algorithm by applying it to other mesh processing techniques such as remeshing, subdivision, or simplification.

Acknowledgement. This work was supported by grant No. R01-2005-000-10120-0 from the Korea Science and Engineering Foundation in the Ministry of Science & Technology.


References
1. Chen J., Han Y.: "Shortest Paths on a Polyhedron; Part I: Computing Shortest Paths", Int. J. Comp. Geom. & Appl. 6(2), 1996.
2. Desbrun M., Meyer M., Alliez P.: "Intrinsic Parameterizations of Surface Meshes", Eurographics 2002 Conference Proceedings, 2002.
3. Floater M., Gotsman C.: "How To Morph Tilings Injectively", J. Comp. Appl. Math., 1999.
4. Floater M.: "Parametrization and smooth approximation of surface triangulations", Computer Aided Geometric Design, 1997.
5. Floater M.: "Mean Value Coordinates", Comput. Aided Geom. Des., 2003.
6. Funkhouser T., Kazhdan M., Shilane P., Min P., Kiefer W., Tal A., Rusinkiewicz S., Dobkin D.: "Modeling by example", ACM Transactions on Graphics, 2004.
7. Kimmel R., Sethian J.A.: "Computing Geodesic Paths on Manifolds", Proc. Natl. Acad. Sci. USA, Vol. 95, 1998.
8. Lee Y., Kim H., Lee S.: "Mesh Parameterization with a Virtual Boundary", Computers and Graphics 26 (2002), 2002.
9. Lee H., Kim L., Meyer M., Desbrun M.: "Meshes on Fire", Computer Animation and Simulation 2001, Eurographics, 2001.
10. Lee H., Tong Y., Desbrun M.: "Geodesics-Based One-to-One Parameterization of 3D Triangle Meshes", IEEE Multimedia, January/March (Vol. 12, No. 1), 2005.
11. Meyer M., Lee H., Barr A., Desbrun M.: "Generalized Barycentric Coordinates to Irregular N-gons", Journal of Graphics Tools, 2002.
12. Mitchell J.S.B.: "Geometric Shortest Paths and Network Optimization", In Handbook of Computational Geometry, J.-R. Sack and J. Urrutia, Eds., Elsevier Science, 2000.
13. Mitchell J.S.B., Mount D.M., Papadimitriou C.H.: "The Discrete Geodesic Problem", SIAM J. of Computing 16(4), 1987.
14. Peyré G., Cohen L.: "Geodesic Re-meshing and Parameterization Using Front Propagation", In Proceedings of VLSM'03, 2003.
15. Polthier K., Schmies M.: "Straightest Geodesics on Polyhedral Surfaces", Mathematical Visualization, 1998.
16. Polthier K., Schmies M.: "Geodesic Flow on Polyhedral Surfaces", Proceedings of Eurographics-IEEE Symposium on Scientific Visualization '99, 1999.
17. Press W., Teukolsky S., Vetterling W., Flannery B.: "Numerical Recipes in C, second edition", Cambridge University Press, New York, USA, 1992.
18. Riken T., Suzuki H.: "Approximate Shortest Path on a Polyhedral Surface Based on Selective Refinement of the Discrete Graph and Its Applications", Geometric Modeling and Processing 2000 (Hong Kong), 2000.
19. Sander P.V., Snyder J., Gortler S.J., Hoppe H.: "Texture Mapping Progressive Meshes", Proceedings of SIGGRAPH 2001, 2001.
20. Sifri O., Sheffer A., Gotsman C.: "Geodesic-based Surface Remeshing", In Proceedings of 12th Intl. Meshing Roundtable, 2003.

Facial Expression Recognition Based on Emotion Dimensions on Manifold Learning

Young-suk Shin

School of Information and Telecommunication Engineering, Chosun University,
#375 Seosuk-dong, Dong-gu, Gwangju, 501-759, Korea
[email protected]

Abstract. This paper presents a new approach to recognizing facial expressions in various internal states using manifold learning (ML). The manifold of facial expressions reflects local features of facial deformations such as concavities and protrusions. We developed a representation of facial expression images based on manifold learning for feature extraction of facial expressions. First, we propose a zero-phase whitening step for illumination-invariant images. Second, a facial expression representation based on locally linear embedding (LLE) was developed. Finally, classification of facial expressions in emotion dimensions was performed on the two-dimensional structure of emotion with a pleasure/displeasure dimension and an arousal/sleep dimension. The proposed system maps facial expressions in various internal states into the embedding space described by LLE. We explore the locally linear embedding space as a facial expression space in continuous dimensions of emotion.

1 Introduction

A challenging problem in automatic facial expression recognition is to detect the change of facial expressions in various internal states. Facial expressions are continuous because the expression image varies smoothly as the expression changes. The variability of expression images can be represented as subtleties of manifolds, such as concavities and protrusions, in the image space. Thus automatic facial expression recognition has to detect subtleties of manifolds in the expression image space, and continuous dimensions of emotion are also required because expression images consist of several other emotions and many combinations of emotions. The dimensions of emotion can overcome the problem of a discrete recognition space because discrete emotions can be treated as regions in a continuous space. The two most common dimensions are "arousal" (calm/excited) and "valence" (negative/positive). Russell argued that the dimensions of emotion can be applied to emotion recognition [1]. Peter Lang has assembled an international archive of imagery rated by arousal and valence with image content [2]. To recognize facial expressions in various internal states, we worked with dimensions of emotion instead of basic emotions or discrete emotion categories. The proposed dimensions of emotion are the pleasure/displeasure dimension and the arousal/sleep dimension.


Many methods [3, 4, 5, 6, 7] for representing facial expression images have been proposed, such as optic flow, EMG (electromyography), geometric tracking, Gabor representation, PCA (Principal Component Analysis) and ICA (Independent Component Analysis). In a recent study, Seung and Lee [8] proposed treating image variability as low-dimensional manifolds embedded in image space. Roweis and Saul [9] showed that the locally linear embedding algorithm is able to learn the global structure of nonlinear manifolds, such as the pose and expression of an individual's face. But there have been no reports on how the intrinsic features of the manifold contribute to facial expression recognition over various internal states. We explore the global structure of nonlinear manifolds of various internal states using the locally linear embedding algorithm. This paper develops a representation of facial expression images based on locally linear embedding for feature extraction of various internal states. This representation consists of two steps, described in Section 3. First, we present a zero-phase whitening step for illumination-invariant images. Second, a facial expression representation based on locally linear embedding is developed. Classification of facial expressions in various internal states is performed on the emotion dimensions, pleasure/displeasure and arousal/sleep, using 1-nearest neighbor. Finally, we discuss the locally linear embedding space and the facial expression space on the dimensions of emotion.

2 Database on Dimensions of Emotion

The face expression images used for this research were a subset of the Korean facial expression database based on the dimension model of emotion [10]. The dimension model explains that emotion states are not independent of one another but related to each other in a systematic way. This model was proposed by Russell [1]. The dimension model also has cultural universals, as shown by Osgood, May & Miron and Russell, Lewicka & Niit [11, 12]. The data set with the dimension structure of emotion contained 498 images, 3 females and 3 males, each image 640 by 480 pixels. Expressions were divided into two dimensions according to the study of internal states through the semantic analysis of words related with emotion by Kim et al. [13] using 83 expressive words. The two dimensions of emotion are the pleasure/displeasure dimension and the arousal/sleep dimension. Each female and male expressor posed 83 internal emotional state expressions when the 83 words of emotion were presented. 51 experimental subjects rated the pictures on the degree of expression in each of the two dimensions on a nine-point scale. The images were labeled with a rating averaged over all subjects. Examples of the images are shown in Figure 1. Figure 2 shows a result of the dimension analysis of 44 emotion words related to internal emotion states.

Fig. 1. Examples from the facial expression database in various internal states

[Figure 2 plot: dimension analysis of 44 emotion words on a nine-point scale; the vertical axis runs from sleep to arousal and the horizontal axis from pleasure to displeasure]

Fig. 2. The dimension analysis of 44 emotion words related to internal emotion states

3 Facial Expression Representation from Manifold Learning

This section develops a representation of facial expression images based on locally linear embedding for feature extraction. This representation consists of two steps. In the first step, we perform a zero-phase whitening step for illumination-invariant images. In the second step, a facial expression representation based on locally linear embedding is developed.

3.1 Preprocessing

The face images used for this research were centered using coordinates of the eye and mouth locations, and then cropped and scaled to 20x20 pixels. The luminance was normalized in two steps. First, the rows of each image were concatenated to produce a 1 × 400 dimensional vector. The row means are subtracted from the dataset, X. Then X is passed through the zero-phase whitening filter, V, which is the inverse square root of the covariance matrix:

$V = E\{XX^T\}^{-1/2}, \qquad Z = XV$   (1)

This means that the mean is set to zero and the variances are equalized to unit variance. Secondly, we subtract the local mean gray-scale value from each sphered patch. Through this process, Z removes much of the variability due to lighting. Fig. 3(a) shows original images before preprocessing and Fig. 3(b) shows images after preprocessing.
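The preprocessing above can be summarized in a few lines of NumPy. This is only a minimal sketch, not the author's code; it assumes X is an N x 400 matrix of row-vectorized 20x20 face crops and that the whitening filter is formed over the 400 pixel dimensions.

```python
import numpy as np

def zero_phase_whiten(X, eps=1e-8):
    """Zero-phase (ZCA-style) whitening of row-vectorized images X (N x 400).
    E{x x^T} is estimated as X^T X / N over the 400 pixel dimensions."""
    X = X - X.mean(axis=1, keepdims=True)          # subtract each image's (row) mean
    C = X.T @ X / X.shape[0]                       # covariance estimate of the pixels
    w, U = np.linalg.eigh(C)                       # C is symmetric
    V = U @ np.diag(1.0 / np.sqrt(w + eps)) @ U.T  # whitening filter V = C^(-1/2)
    Z = X @ V                                      # sphered data, unit variances
    Z = Z - Z.mean(axis=1, keepdims=True)          # subtract the local mean gray value per patch
    return Z
```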



Fig. 3. (a) Original images before preprocessing; (b) images after preprocessing

3.2 Locally Linear Embedding Representation

The locally linear embedding (LLE) algorithm [9] preserves the local neighborhood structure of the data in both the embedding space and the observation space, and maps a given set of high-dimensional data points into a surrogate low-dimensional space. Similar expressions on the continuous dimensions of emotion can exist in the same local neighborhood on the manifold, and the mapping from the high-dimensional data points to the low-dimensional points on the manifold is very important for dimensionality reduction. LLE overcomes the problem of nonlinear dimensionality reduction, and its algorithm does not involve local minima [9]. Therefore, we applied the locally linear embedding algorithm to feature extraction of facial expressions.

The LLE algorithm is used to obtain the low-dimensional data Y corresponding to the training set X. X is a D by N matrix consisting of N data items in D dimensions. Y, a d by N matrix, consists of d < D dimensional embedding data for X. The LLE algorithm can be described as follows.

Step 1: compute the neighbors of each data point, X.

Step 2: compute the weights W that best reconstruct each data point from its neighbors, minimizing the cost in eq. (2) under two constraints:

$\varepsilon(W) = \Big| x_i - \sum_{j=1}^{K} W_{ij} x_{ij} \Big|^2$   (2)

First, each data point $x_i$ is reconstructed only from its neighbors, enforcing $W_{ij} = 0$ if $x_i$ and $x_j$ are not in the same neighborhood. Second, the rows of the weight matrix have the sum-to-one constraint $\sum_{j=1}^{K} W_{ij} = 1$. These constraints give the optimal weights $W_{ij}$ in the least-squares sense. K is the number of nearest neighbors per data point.

Step 3: compute the vectors Y best reconstructed by the weights W, minimizing the quadratic form in eq.(3) by its bottom nonzero eigenvectors.


$\Phi(Y) = \Big| y_i - \sum_{j=1}^{k} W_{ij} y_{ij} \Big|^2$   (3)

This optimization is performed subject to constraints. Considering that the cost $\Phi(Y)$ is invariant to translation in Y, the constraint $\sum_i y_i = 0$ removes this degree of freedom by requiring the coordinates to be centered on the origin. Also, $\frac{1}{N}\sum_i y_i y_i^T = I$ avoids the degenerate solution Y = 0. Therefore, eq. (3) can be rewritten as an eigenvector decomposition problem as follows:

$\Phi(Y) = \Big| y_i - \sum_{j=1}^{k} W_{ij} y_{ij} \Big|^2 = \arg\min_Y \| (I - W) Y \|^2 = \arg\min_Y \; Y^T (I - W)^T (I - W) Y$   (4)

The optimal solution of eq. (3) is given by the smallest eigenvectors of the matrix $(I - W)^T (I - W)$. The eigenvector with eigenvalue zero is discarded, because discarding it enforces the constraint term. Thus we need to compute the bottom (d+1) eigenvectors of the matrix, and we obtain the corresponding low-dimensional data set Y in the embedding space from the training set X. Figure 4 shows facial expression images reconstructed from the bottom (d+1) eigenvectors corresponding to the d+1 smallest eigenvalues discovered by LLE, with K=3 neighbors per data point. In particular, the first eight components (d=8) discovered by LLE represent the features of facial expressions well. Facial expression images of various internal states were mapped into the embedding space described by the first two components of LLE (see Fig. 5). From Figure 5, we can explore the structural nature of facial expressions in various internal states in the embedding space modeled by LLE.
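A compact way to reproduce this embedding step is scikit-learn's LLE implementation. The sketch below is illustrative rather than the author's code; the input Z (the whitened 400-dimensional face vectors) and the file name are assumptions carried over from the preprocessing sketch above.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Z: N x 400 matrix of whitened face vectors (see the preprocessing sketch)
Z = np.load("whitened_faces.npy")          # hypothetical file name

# K = 3 neighbors per point and d = 8 embedding components, as in the paper
lle = LocallyLinearEmbedding(n_neighbors=3, n_components=8)
Y = lle.fit_transform(Z)                   # N x 8 embedding coordinates

# The first two components can be plotted to inspect the expression manifold
first_two = Y[:, :2]
```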


Fig. 4. Facial expression images reconstructed from bottom (d+1) eigenvectors (a) d=1, (b) d=3, and (c) d=8


Fig. 5. 318 facial expression images of various internal states mapped into the embedding space described by the first two components of LLE

The further a point is from the center, the higher the intensity in the displeasure and arousal dimensions. Near the center point, facial expression images of various internal states coexist.

4 Result and Discussion

Facial expression recognition in various internal states with features extracted by the LLE algorithm was evaluated with a 1-nearest-neighbor classifier on the two-dimensional structure of emotion, with a pleasure/displeasure dimension and an arousal/sleep dimension. 252 images were used for training, and 66 images excluded from the training set were used for testing. The 66 test images include 11 expression images for each of the six people. The class label to be recognized corresponds to one of four sections of the two-dimensional structure of emotion. Fig. 6 shows the sections of each class label. Table 1 gives a subset of the facial expression recognition results of the proposed algorithm on the two dimensions of emotion. The recognition rate on the test set was 90.9% in the pleasure/displeasure dimension and 56.1% in the arousal/sleep dimension. In Table 1, the first column indicates the emotion words of the 11 expression images used for testing, the second and third columns give the value of each test image on the two bipolar dimensions, the fourth column gives the class label (C1, C2, C3, C4) of the test data, and the classification results of the proposed algorithm are shown in the fifth column.
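The evaluation step can be sketched as a plain 1-nearest-neighbor lookup in the LLE feature space. This is an illustrative reconstruction, not the published code; the quadrant-style mapping from (P-D, A-S) ratings to four class labels and the split value of 5 on the nine-point scale are assumptions (the real regions are defined in Fig. 6).

```python
import numpy as np

def quadrant_label(pd, asl, split=5.0):
    """Map a (pleasure/displeasure, arousal/sleep) rating to one of four classes.
    The exact region boundaries are an assumption; see Fig. 6 for the actual layout."""
    return 2 * int(pd >= split) + int(asl >= split)   # 0..3, one id per quadrant

def one_nn_predict(train_feats, train_labels, test_feats):
    """1-nearest-neighbor classification in the d=8 LLE embedding space."""
    preds = []
    for x in test_feats:
        dists = np.linalg.norm(train_feats - x, axis=1)
        preds.append(train_labels[np.argmin(dists)])
    return np.array(preds)

# train_feats, test_feats: LLE features; train_labels: class labels of training images
```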

[Figure 6 diagram: four class regions C1, C2, C3 and C4 on the two-dimensional structure of emotion; the horizontal axis runs from pleasure (0) to displeasure (10) and the vertical axis from sleep (0) to arousal (10)]

Fig. 6. The class regions on the two-dimensional structure of emotion

Table 1. Result data of facial expression recognition by the proposed algorithm (Abbreviations: P-D, pleasure/displeasure; A-S, arousal/sleep)

Emotion word (person)   P-D    A-S    Class label of test data   Recognized class label
pleasantness (a)        1.40   5.47   3                          3
depression (a)          6.00   4.23   1                          1
crying (a)              7.13   6.17   2                          2
gloomy (a)              5.90   3.67   1                          1
strangeness (a)         6.13   6.47   2                          1
proud (a)               2.97   5.17   3                          1
confident (a)           2.90   4.07   4                          3
despair (a)             7.80   5.67   1                          1
sleepiness (a)          6.00   1.93   4                          1
likable (a)             2.07   4.27   4                          3
delight (a)             1.70   5.70   3                          3
gloomy (b)              6.60   3.83   1                          2
strangeness (b)         6.03   5.67   2                          4
proud (b)               2.00   4.53   4                          3
confident (b)           2.47   5.27   4                          1
despair (b)             6.47   5.03   2                          2
sleepiness (b)          6.50   3.80   1                          1
likable (b)             1.83   4.97   4                          4
delight (b)             2.10   5.63   3                          4
boredom (b)             6.47   5.73   2                          3
tedious (b)             6.73   4.77   1                          1
jealousy (b)            6.87   6.80   2                          2

This paper explores two problems. One is a new approach to recognizing facial expressions in various internal states using the locally linear embedding algorithm. The other is the structural nature of facial expressions in various internal states in the embedding space modeled by LLE.


For the first problem, the recognition rates of each dimension obtained with 1-nearest neighbor were 90.9% in the pleasure/displeasure dimension and 56.1% in the arousal/sleep dimension. The two-dimensional structure of emotion appears to be a stable structure for facial expression recognition, and the pleasure/displeasure dimension is more stable than the arousal/sleep dimension. For the second problem, facial expressions in the continuous dimensions of emotion showed a cross structure in the locally linear embedding space. The further a point is from the center, the higher the intensity in the displeasure and arousal dimensions. From these results, we can see that the facial expression structure in the continuous dimensions of emotion is very similar to the structure represented by the manifold model. Thus, the relationships between facial expressions in various internal states can be handled conveniently on the manifold model. In future work, we will consider learning invariant manifolds of facial expressions.

Acknowledgements. This work was supported by the Korea Research Foundation Grant funded by the Korean Government (KRF-2005-042-D00285).

References
1. Russell, J. A.: Evidence of convergent validity on the dimension of affect. Journal of Personality and Social Psychology, 30 (1978) 1152-1168
2. Lang, P. J.: The emotion probe: Studies of motivation and attention. American Psychologist, 50(5) (1995) 372-385
3. Donato, G., Bartlett, M., Hager, J., Ekman, P., Sejnowski, T.: Classifying facial actions. IEEE PAMI, 21(10) (1999) 974-989
4. Schmidt, K., Cohn, J.: Dynamics of facial expression: Normative characteristics and individual differences. Intl. Conf. on Multimedia and Expo (2001)
5. Pantic, M., Rothkrantz, L.J.M.: Towards an Affect-Sensitive Multimodal Human Computer Interaction. Proc. of IEEE, 91, 1370-1390
6. Shin, Y., An, Y.: Facial expression recognition based on two dimensions without neutral expressions. LNCS (3711) (2005) 215-222
7. Bartlett, M.: Face Image Analysis by Unsupervised Learning. Kluwer Academic Publishers (2001)
8. Seung, H. S., Lee, D.D.: The manifold ways of perception. Science (290) (2000) 2268-2269
9. Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science (290) (2000) 2323-2326
10. Bahn, S., Han, J., Chung, C.: Facial expression database for mapping facial expression onto internal state. '97 Emotion Conference of Korea (1997) 215-219
11. Osgood, C. E., May, W.H., Miron, M.S.: Cross-cultural Universals of Affective Meaning. Urbana: University of Illinois Press (1975)
12. Russell, J. A., Lewicka, M., Niit, T.: A cross-cultural study of a circumplex model of affect. Journal of Personality and Social Psychology, 57 (1989) 848-856
13. Kim, Y., Kim, J., Park, S., Oh, K., Chung, C.: The study of dimension of internal states through word analysis about emotion. Korean Journal of the Science of Emotion and Sensibility, 1 (1998) 145-152

AI Framework for Decision Modeling in Behavioral Animation of Virtual Avatars

A. Iglesias 1 and F. Luengo 2

1 Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. de los Castros, s/n, 39005, Santander, Spain
2 Department of Computer Science, University of Zulia, Post Office Box #527, Maracaibo, Venezuela
[email protected], [email protected]

Abstract. One of the major current issues in Artificial Life is the decision modeling problem (also known as goal selection or action selection). Recently, some Artificial Intelligence (AI) techniques have been proposed to tackle this problem. This paper introduces a new AI-based framework for decision modeling. The framework is applied to generate realistic animations of virtual avatars evolving autonomously within a 3D environment and able to follow behavioral patterns that appear intelligent to a human observer. Two examples of its application to different scenarios are also briefly reported.

1 Introduction

The realistic simulation and animation of the behavior of virtual avatars emulating human beings (also known as Artificial Life) has attracted much attention during the last few years [2,5,6,7,8,9,10,11,12,13]. A major goal in behavioral animation is the construction of an "intelligent" system able to integrate the different techniques required for the realistic simulation of the behavior of virtual humans. The challenge is to provide the virtual avatars with a high degree of autonomy, so that they can evolve freely with minimal input from the animator. In addition, this animation is expected to be realistic; in other words, the virtual avatars must behave according to reality from the point of view of a human observer. Recently, some Artificial Intelligence (AI) techniques have been proposed to tackle this problem [1,3,4,8]. This paper introduces a new AI-based framework for decision modeling. In particular, we apply several AI techniques (such as neural networks, expert systems, genetic algorithms and K-means) in order to create a sophisticated behavioral system that allows the avatars to take intelligent decisions by themselves. The framework is applied to generate realistic animations of virtual avatars evolving autonomously within a 3D environment and able to follow behavioral patterns that appear intelligent to a human observer. Two examples of the application of this framework to different scenarios are briefly reported.


The structure of this paper is as follows: the main components of our behavioral system are described in detail in Section 2. Section 3 discusses the performance of this approach by means of two simple yet illustrative examples. Conclusions and future lines in Section 4 close the paper.

2 Behavioral System

In this section the main components of our behavioral system are described.

2.1 Environment Recognition

At the first step, a virtual world is generated and the virtual avatars are placed within it. In the examples described in this paper, we have chosen a virtual park and a shopping center, carefully chosen environments that exhibit lots of potential object-avatar interactions. In order to interact with the 3D world, each virtual avatar is equipped with a perception subsystem that includes a set of individual sensors to analyze the environment and capture relevant information. This analysis includes the determination of distances and positions of the different objects of the scene, so that the agent can move in this environment, avoid obstacles, identify other virtual avatars and take decisions accordingly. Further, each avatar has a predefined vision range (given by a distance threshold value determined by the user): an object is considered to be visible only if the distance from the avatar to the object is less than this threshold value; otherwise, the object becomes invisible. All this information is subsequently sent to an analyzer subsystem, where it is processed by using a representation scheme based on genetic algorithms. This scheme has proved to be extremely useful for pattern recognition and identification. Given a pair of elements A and B and a sequence j, there is a distance function that determines how near these elements are. It is defined as $dist(j, A, B) = \frac{1}{k} \sum_{i=1}^{k} |A_i^j - B_i^j|$, where $A_i^j$ denotes the ith gene at sequence j for

the chromosome A, and k denotes the number of genes of such a sequence. Note that we can think of sequences in terms of levels in a tree: sequence j is simply level j down the tree, with the top of the tree as sequence 1. A and B are similar at sequence (or level) j if dist(j, A, B) = 0. Note that this hierarchical structure implies that an arbitrary object is nearer to the one minimizing the distance at earlier sequences. This simple expression provides a quite accurate procedure to classify objects at a glance, by simply comparing them sequentially at each depth level.
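A literal reading of this distance can be coded in a few lines. The sketch below is illustrative only; the nested-list chromosome layout (one list of genes per sequence level) is an assumption, since the paper does not specify the data structure.

```python
def level_distance(j, A, B):
    """Distance between chromosomes A and B at sequence (tree level) j.
    A and B are assumed to be lists of gene lists, one list per level (level 1 = index 0)."""
    genes_a, genes_b = A[j - 1], B[j - 1]
    k = len(genes_a)
    return sum(abs(a - b) for a, b in zip(genes_a, genes_b)) / k

def classify(query, candidates, depth):
    """Pick the candidate nearest to `query`, comparing level by level;
    earlier levels dominate, as described above."""
    best = list(candidates)
    for j in range(1, depth + 1):
        d = [level_distance(j, query, c) for c in best]
        m = min(d)
        best = [c for c, dc in zip(best, d) if dc == m]
        if len(best) == 1:
            break
    return best[0]
```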

2.2 Knowledge Acquisition

Once new information is attained and processed by the analyzer, it is sent to the knowledge motor. This knowledge motor is actually the “brain” of our system. Its main components are depicted in Figure 1(left). Firstly, the current information

AI Framework for Decision Modeling in Behavioral Animation

91

Fig. 1. (left) Knowledge motor scheme; (right) goal selection subsystem scheme

is temporarily stored in the knowledge buffer until new information is attained. At that time, the previous information is sent to the knowledge updater (KU), the new one is stored in the knowledge buffer, and so on. The KU updates both the memory area and the knowledge base. The memory area is a neural network applied to learn from data (in our problem, the information received from the environment through the perception subsystem). In this paper we consider unsupervised learning, and hence we use an autoassociative scheme, since the inputs themselves are used as targets. To update the memory area, we employ the K-means least-squares partitioning algorithm for competitive networks, which are formed by an input and an output layer connected by feed-forward connections. Each input pattern represents a point in the configuration space (the space of inputs) where we want to obtain classes. This type of architecture is usually trained with a winner-takes-all algorithm, so that only those weights associated with the output neuron with the largest value (the winner) are updated. The basic algorithm consists of two main steps: (1) compute the cluster centroids and use them as new cluster seeds, and (2) assign each chromosome to the nearest centroid. The basic idea behind this formulation is to overcome the limitation of having more data than neurons by allowing each neuron to store more than one data item at the same time. The knowledge base is actually a rule-based expert system, containing both concrete knowledge (facts) and abstract knowledge (inference rules). Facts include complex relationships among the different elements (relative positions, etc.) and personal information about the avatars (personal data, schedule, hobbies or habits), i.e. what we call the avatar's characteristic patterns. Additional subsystems for tasks like learning, coherence control, action execution and others have also been incorporated. This deterministic expert system is subsequently modified by means of probabilistic rules, for which new data are used in order to update the probability of a particular event. Thus, a neuron does not produce a deterministic output but a probabilistic one: what is actually computed is the probability of a neuron storing a particular data item at a particular time. This probability is continuously updated in order to adapt our recalls to the most recent data. This leads to the concept of reinforcement, based on the fact that the repetition of a particular event over time increases the probability of recalling it.


Of course, some particular data are associated with high-relevance events whose influence does not decrease over time. A learning rate parameter introduced in our scheme is intended to play this role. Finally, the request manager is the component that, on the basis of the information received from the previous modules, provides the information requested by the goal selection subsystem described in the next section.
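As a rough illustration of the memory-area update described in this section, the following sketch implements the two-step K-means procedure (recompute the centroids and reuse them as seeds, then reassign each chromosome to its nearest centroid) with a winner-takes-all assignment. It is a generic reconstruction, not the authors' implementation.

```python
import numpy as np

def kmeans_memory_update(patterns, centroids, iters=10):
    """patterns: (n, d) array of perceived input patterns (chromosome vectors).
    centroids: (k, d) array, one per output neuron; each neuron may store several patterns."""
    for _ in range(iters):
        # winner-takes-all assignment: each pattern activates its nearest output neuron
        dists = np.linalg.norm(patterns[:, None, :] - centroids[None, :, :], axis=2)
        winners = np.argmin(dists, axis=1)
        # recompute each centroid from the patterns it won and use it as the new seed
        for c in range(len(centroids)):
            members = patterns[winners == c]
            if len(members) > 0:
                centroids[c] = members.mean(axis=0)
    return centroids, winners
```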

2.3 Decision Modeling

A central issue in behavioral animation is the choice of appropriate mechanisms for decision modeling. Those mechanisms decide the next action to be carried out from a set of feasible actions. The fundamental task of any decision modeling module is to determine a priority-sorted list of goals to be performed by the virtual agent. A goal's priority is calculated as a combination of different internal states of the avatar (given by mathematical functions not described in this paper because of space limitations) and external factors (which determine the goal's feasibility). Figure 1(right) shows the architecture of our goal selection subsystem, comprised of three modules and a goal database. The database stores a list of arrays (one for each of the available goals at each time) comprised of: the goal ID, its feasibility rate (determined by the analyzer subsystem), the priority of the goal, the wish rate (determined by the emotional analyzer), the time at which the goal is selected, and its success rate. The emotional analyzer (EA) is the module responsible for updating the wish rate of a goal (regardless of its feasibility). This rate takes values on the interval [0, 100] according to some mathematical functions (not described here) that simulate human reactions in a very realistic way (as shown in Section 3). The intention planning (IP) module determines the priority of each goal. To this aim, it uses information such as the feasibility and wish rates. From this point of view, it is rather similar to the "intention generator" of [13], except that decisions in that system are exclusively based on rules. This module also comprises a buffer to temporarily store goals that have been interrupted for a while, so that the agent exhibits a certain "persistence of goals". This feature is especially valuable for preventing the oscillatory behavior that appears when the current goal changes continuously. The last module is the action planning (AP), a rule-based expert system that gets information from the environment (via the knowledge motor), determines the sequence of actions to be carried out in order to achieve a particular goal and updates the goal's status accordingly.
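The goal database entries and the priority-based selection can be pictured with a small data structure like the one below. This is only a sketch under assumptions: the field names mirror the array described above, and the way priority combines the wish rate and the feasibility is illustrative, since the actual functions are not given in the paper.

```python
from dataclasses import dataclass
import time

@dataclass
class GoalRecord:
    goal_id: str
    feasibility: float      # from the analyzer subsystem, assumed in [0, 1]
    wish: float             # from the emotional analyzer, in [0, 100]
    priority: float = 0.0   # filled in by the intention planning module
    selected_at: float = 0.0
    success: float = 0.0

def select_goal(goals):
    """Pick the next goal using an assumed priority = wish * feasibility ranking."""
    for g in goals:
        g.priority = g.wish * g.feasibility
    best = max(goals, key=lambda g: g.priority)
    best.selected_at = time.time()
    return best
```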

2.4 Action Planning and Execution

Once the goals and priorities are defined, this information is sent to the motion subsystem to be transformed into motion routines (just as the orders of our brain are sent to our muscles) and then animated in the virtual world. Currently, we


Fig. 2. Example 1: screenshots of the virtual park environment

have implemented routines for path planning and obstacle avoidance. In particular, we have employed a modification of the A* path finding algorithm, based on the idea of preventing path recalculation until a new obstacle is encountered. This simple procedure has yielded substantial savings in time in all our experiments. In addition, sophisticated collision avoidance algorithms have been incorporated into this system (see the examples described in Section 3).
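The lazy-replanning idea can be expressed as a thin wrapper around any A* routine: keep following the cached path and only recompute when a newly perceived obstacle actually blocks it. The sketch assumes a hypothetical `astar(start, goal, blocked)` helper and is not the authors' implementation.

```python
class LazyPlanner:
    """Follow a cached A* path; replan only when a new obstacle blocks it."""

    def __init__(self, astar):
        self.astar = astar          # hypothetical astar(start, goal, blocked) -> list of cells
        self.path = []

    def next_step(self, position, goal, blocked):
        # drop the part of the path already traversed
        if position in self.path:
            self.path = self.path[self.path.index(position) + 1:]
        # recompute only if there is no path yet or an obstacle now lies on it
        if not self.path or any(cell in blocked for cell in self.path):
            self.path = self.astar(position, goal, blocked)
        return self.path[0] if self.path else position
```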

3 Two Illustrative Examples

In this section, two illustrative examples are used to show the good performance of our approach. The examples are available on the Internet at the URLs: http://personales.unican.es/iglesias/CGGM2007/samplex.mov (x = 1, 2). Figure 2 shows some screenshots from the first movie. In picture (a) a woman and her two children go into the park. The younger kid runs after some birds. After failing to capture them, he gets bored and joins his brother. Then, the group moves towards the wheel, avoiding the trees and the seesaw (b). Simultaneously, other people (the husband and a girl) enter the park. In (c) a kid is playing with the wheel while his brother gets frustrated after expecting to play with the seesaw (in fact, he was waiting for his brother beside the seesaw). After a while, he decides to join his brother and play with the wheel anyway. Once her children are safely playing, the woman relaxes and goes to meet her husband, who is seated on a bench (d). The girl is seated in front of them, reading a newspaper. Two more people go into the park: a man and a kid. The kid goes directly towards the playground, while the man sees the girl, is attracted by her and decides to sit down on the same bench, looking for a chat. As she does not want to chat with him, she stands up and leaves. The new kid goes to play with the wheel while the two brothers decide to play with the seesaw. The playground has two seesaws, so each brother goes towards the nearest one (e). Suddenly, they realize they must use the same one, so one brother changes his trajectory and moves towards the other seesaw. The mother is coming back in order to look after her children. Her husband also comes behind her and they


Fig. 3. Temporal evolution of the internal states (top) and available goals’ wishes (bottom) for the second example in this paper

start to chat again (f). The man on the bench is now alone and getting upset, so he decides to take a walk and look for the girl again. Meanwhile, she starts to do physical exercises (g). When the man realizes she is busy and hence will not likely pay attention to him, he changes his plans and walks towards the couple, who are still chatting (g). The man realizes they are not interested in chatting with him either, so he finally leaves the park. It is interesting to point out that the movie includes a number of remarkable motion and behavioral features. For instance, pictures (a)-(b)-(g) illustrate several of our motion algorithms: persecution, obstacle avoidance, path finding, interaction with objects (wheel, seesaw, bench) and other avatars, etc. People in the movie exhibit a remarkable ability to capture information from the environment and change their trajectories in real time. On the other hand, they also exhibit a human-like ability to realize what is going on around others and change their plans accordingly. Each virtual avatar has no previous knowledge of either the environment or the other avatars, as might happen in real life when people enter a new place for the first time or meet new people. The second scene consists of a shopping center in which the virtual avatars can perform a number of different actions, such as eat, drink, play videogames, sit down to rest and, of course, do shopping. We consider four virtual avatars: three kids and a woman. The pictures in Figure 3 are labelled with eight numbers indicating the different simulation milestones (the corresponding animation screenshots for those time units are displayed in Figure 4): (1) at the initial


Fig. 4. Example 2: screenshots of the shopping center environment

step, the three kids go to play with the videogame machines, while the woman moves towards the eating area (indicated by the tables in the scene). Note that the internal state with the highest value for the avatar analyzed here is energy, so the avatar is going to perform some kind of dynamic activity, such as playing; (2) the kid keeps playing (and his energy level goes down) until his satisfaction reaches its maximum value. At that time, the anxiety increases, and the avatar's wish turns towards performing a different activity. However, the goal play videogames still has the highest wish rate, so it remains in progress for a while; (3) at this simulation step, the anxiety reaches a local maximum again, meaning that the kid is getting bored of playing videogames. Simultaneously, the goal with the highest value is drink water, so the kid stops playing and looks for a drink machine; (4) at this time, the kid reaches the drink machine, buys a can and drinks. Consequently, the internal state function thirsty decreases as the agent drinks, until the status of this goal becomes goal attained; (5) once this goal is satisfied, the goal play videogames is the new current goal, so the kid comes back towards the videogame machines; (6) however, the energy level is very low, so the goal play videogames is interrupted, and the kid looks for a bench to sit down and have a rest; (7) once seated, the energy level goes up and the goal have a rest no longer applies; (8) since the previous goal play videogames is still in progress, the agent comes back and plays again. Figure 3 shows the temporal evolution of the internal states (top) and the goals' wishes (bottom) for one of the kids. Similar graphics can be obtained for the other avatars (they are not included here because of space limitations). The picture on the top displays the temporal evolution of the five internal state functions (valued on the interval [0, 100]) considered in this example, namely energy, shyness, anxiety, hunger and thirsty. On the bottom, the wish rate (also valued on the interval [0, 100]) of the feasible goals (have a rest, eat something, drink water, take a walk and play videogames) is depicted.

4 Conclusions and Future Work

The core of this paper is the realistic simulation of the human behavior of virtual avatars living in a virtual 3D world. To this purpose, the paper introduces a behavioral system that uses several Artificial Intelligence techniques so that the avatars can behave in an intelligent and autonomous way. Future lines of research include the determination of new functions and parameters to reproduce human actions and decisions and the improvement of both the interaction with users and the quality of graphics. Financial support from the Spanish Ministry of Education and Science (Project Ref. #TIN2006-13615) is acknowledged.

References
1. Funge, J., Tu, X., Terzopoulos, D.: Cognitive modeling: knowledge, reasoning and planning for intelligent characters. SIGGRAPH'99 (1999) 29-38
2. Geiger, C., Latzel, M.: Prototyping of complex plan based behavior for 3D actors. Fourth Int. Conf. on Autonomous Agents, ACM Press, NY (2000) 451-458
3. Granieri, J.P., Becket, W., Reich, B.D., Crabtree, J., Badler, N.I.: Behavioral control for real-time simulated human agents. Symposium on Interactive 3D Graphics, ACM, New York (1995) 173-180
4. Grzeszczuk, R., Terzopoulos, D., Hinton, G.: NeuroAnimator: fast neural network emulation and control of physics-based models. SIGGRAPH'98 (1998) 9-20
5. Iglesias, A., Luengo, F.: New goal selection scheme for behavioral animation of intelligent virtual agents. IEICE Trans. on Inf. and Systems, E88-D(5) (2005) 865-871
6. Luengo, F., Iglesias, A.: A new architecture for simulating the behavior of virtual agents. Lecture Notes in Computer Science, 2657 (2003) 935-944
7. Luengo, F., Iglesias, A.: Framework for simulating the human behavior for intelligent virtual agents. Lecture Notes in Computer Science, 3039 (2004) Part I: Framework architecture, 229-236; Part II: Behavioral system, 237-244
8. Monzani, J.S., Caicedo, A., Thalmann, D.: Integrating behavioral animation techniques. EUROGRAPHICS'2001, Computer Graphics Forum 20(3) (2001) 309-318
9. Raupp, S., Thalmann, D.: Hierarchical model for real time simulation of virtual human crowds. IEEE Trans. Visual. and Computer Graphics 7(2) (2001) 152-164
10. Sanchez, S., Balet, O., Luga, H., Duthen, Y.: Autonomous virtual actors. Lecture Notes in Computer Science 3015 (2004) 68-78
11. de Sevin, E., Thalmann, D.: The complexity of testing a motivational model of action selection for virtual humans. Proceedings of Computer Graphics International, IEEE CS Press, Los Alamitos, CA (2004) 540-543
12. Thalmann, D., Monzani, J.S.: Behavioural animation of virtual humans: what kind of law and rules? Proc. Computer Animation 2002, IEEE CS Press (2002) 154-163
13. Tu, X., Terzopoulos, D.: Artificial fishes: physics, locomotion, perception, behavior. Proceedings of ACM SIGGRAPH'94 (1994) 43-50

Studies on Shape Feature Combination and Efficient Categorization of 3D Models

Tianyang Lv 1,2, Guobao Liu 1, Jiming Pang 1, and Zhengxuan Wang 1

1 College of Computer Science and Technology, Jilin University, Changchun, China
2 College of Computer Science and Technology, Harbin Engineering University, Harbin, China
[email protected]

Abstract. In the field of 3D model retrieval, the combination of different kinds of shape features is a promising way to improve retrieval performance, and the efficient categorization of 3D models is critical for organizing models. The paper proposes a combination method which automatically decides the fixed weights of different shape features. Based on the combined shape feature, the paper applies cluster analysis to efficiently categorize 3D models according to their shape. The standard 3D model database, the Princeton Shape Benchmark, is adopted in the experiments, and our method shows good performance not only in improving retrieval performance but also in categorization.

Keywords: Shape-based 3D model retrieval; feature combination; categorization; clustering.

1 Introduction

With the proliferation of 3D models and their wide spread through the internet, 3D model retrieval emerges as a new field of multimedia retrieval and has great application value in industry, the military, etc. [1]. Similar to studies in image or video retrieval, research in 3D model retrieval concentrates on content-based retrieval [2], especially shape-based retrieval. The major problem of shape-based retrieval is extracting the model's shape feature, which should satisfy properties such as rotation invariance, the ability to represent various kinds of shape, and describing similar shapes with similar features. Although many methods for extracting shape features have been proposed [3], research shows that none is the best for all kinds of shapes [4, 5, 6, 7]. To solve this problem, an effective way is to combine different shape features [5, 6, 7]. The critical step of the combination is determining the weights of the shape features. For instance, ref. [5] determines the fixed weight from user experience, which is based on numerous experiments; meanwhile, it decides the dynamic weight based on the retrieval result and the categorization of 3D models. However, these methods have shortcomings: they need user experience to decide an appropriate fixed weight and cannot assign a weight to a new feature; and computing the dynamic weight is too time consuming, while its performance is only a little better than the fixed-weight way.


Moreover, it is still an open problem to categorize 3D models. Nowadays, the categorization of 3D models depends on manual work, as in the Princeton Shape Benchmark (PSB) [4]. Even if the drawback of being time consuming is not taken into consideration, the manual way also results in the following mistakes: first, models with similar shape are classified into different classes, like the One Story Home class and the Barn class of PSB; second, models with apparently different shapes are classified into the same class, like the Potted Plant class and the Stair class. Table 1 states the details. This is because humans categorize 3D models according to their semantics in real life, instead of their shape.

Table 1. Mistakes of manual categorization of PSB

[Table 1 shows example models from the One Story Home, Barn, Potted Plant, and Stair classes]

To solve these problems, the paper conducts research in two aspects: first, we analyze the influence of the value of the weight on the combination performance, and propose a method which automatically decides the value of the fixed weight; second, we introduce an efficient way of categorizing 3D models based on the clustering result. The rest of the paper is organized as follows: Section 2 introduces the automatic combination method; Section 3 states the categorization based on the clustering result; Section 4 gives the experimental results on PSB; and Section 5 summarizes the paper.

2 An Automatic Decision Method of Features' Fixed-Weight

When combining different shape features with fixed weights, the distance dcom between model q and model o is computed as follows:

$d_{com}(q, o) = \sum_{i=1}^{l} w_i \frac{d_i(q, o)}{\max(d_i(q))}$   (1)

where l is the number of different shape features, wi is the fixed weight of the ith shape feature, di(q, o) is the distance between q and o under the ith shape feature vector, and max(di(q)) is the maximum distance between q and the other models. Previous research shows that the Manhattan distance performs better than the Euclidean distance, thus the paper adopts the Manhattan distance when computing di(q, o).
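A direct transcription of eq. (1) with Manhattan distances might look like the following sketch (an illustration, not the authors' code); the feature matrices and the weight vector are assumed inputs.

```python
import numpy as np

def combined_distance(query_feats, db_feats, weights):
    """query_feats: list of l feature vectors for the query model q.
    db_feats: list of l matrices (n_models x dim_i), one per shape feature.
    weights: list of l fixed weights w_i. Returns combined distances to all models."""
    n_models = db_feats[0].shape[0]
    d_com = np.zeros(n_models)
    for w, q, F in zip(weights, query_feats, db_feats):
        d = np.abs(F - q).sum(axis=1)      # Manhattan distance d_i(q, o) for every o
        d_com += w * d / d.max()           # normalize by max(d_i(q)) as in eq. (1)
    return d_com
```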


In this paper, four kinds of feature extraction methods are adopted and 5 sets of feature vectors are obtained from PSB. The details are as follows: (1) the shape feature extraction method based on the depth buffer [11], termed DBD, which obtains a feature vector with 438 dimensions; (2) the method based on the EDT [12], termed NEDT, which obtains a vector with 544 dimensions; (3) the method based on spherical harmonics [13], which obtains two sets of vectors with 32 dimensions and 136 dimensions, termed RSH-32 and RSH-136 respectively; (4) the method performing the spherical harmonic transformation on the voxelized models, termed SHVF, which obtains a feature vector with 256 dimensions.

We conduct experiments on PSB to analyze the influence of the value of wi on the combination performance. Table 2 evaluates the performance of combining any two out of the 5 different features on PSB; the criterion R-precision is adopted [8]. In the left half of the table the weight of each feature is equal. It can be seen that there co-exist good cases, like combining DBD and NEDT, and bad cases, like combining DBD and RSH-32. But if the fixed weights (4:4:2:1:1) decided according to our experience are adopted (the right half of the table), the performance is much better.

Table 2. Combination performance comparison under different fixed weights

                  wi = 1 : 1 : 1 : 1 : 1                     wi = 4 : 4 : 2 : 1 : 1
            DBD    NEDT   RSH-136 RSH-32 SHVF          DBD    NEDT   RSH-136 RSH-32 SHVF
+DBD        0.354  --     --      --     --            0.354  --     --      --     --
+NEDT       0.390  0.346  --      --     --            0.390  0.346  --      --     --
+RSH-136    0.364  0.376  0.292   --     --            0.372  0.378  0.292   --     --
+RSH-32     0.283  0.286  0.258   0.168  --            0.351  0.343  0.279   0.168  --
+SHVF       0.308  0.308  0.286   0.204  0.201         0.360  0.350  0.299   0.204  0.201

This experiment shows that an appropriate value of wi can greatly improve the combination performance. Although the wi decided from experience perform well, they have limitations: deciding them is time consuming and hard to generalize. Thus, it is necessary to decide wi automatically. To accomplish this task, we suppose that if a feature is the best for most models, its weight should be the highest. If one feature is the best for a model, its weight is incremented by 1/N, where N is the total number of models. For a set of classified 3D models, we follow the "winner-take-all" rule: if the ith feature is the best for the jth class Cj of models, wi is incremented by nj/N, where nj is the size of Cj. Finally, the automatic decision formula for the fixed weights wi is stated as follows:

$w_i = \frac{\sum_{j=1}^{nClass} f_i(C_j) \cdot n_j}{N}$   (2)


where nClass is the number of classes of models; fi(Cj) = 1 iff the R-precision of the ith feature is the highest for Cj, otherwise fi(Cj) = 0. And $\sum_{i=1}^{l} w_i = 1$.

Obviously, the proposed method can automatically decide the fixed weight for a new shape feature by re-computing Formula (2). During this process, the weights of the existing features are also adjusted.
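The winner-take-all weighting of eq. (2) is straightforward to compute once the per-class R-precision of each feature is known. Below is an illustrative sketch; the `r_precision` input (classes x features) is an assumed precomputed matrix.

```python
import numpy as np

def automatic_fixed_weights(r_precision, class_sizes):
    """r_precision: (nClass, l) matrix, R-precision of each feature on each class.
    class_sizes: length-nClass array of n_j. Returns the l fixed weights, summing to 1."""
    n_total = class_sizes.sum()
    winners = np.argmax(r_precision, axis=1)       # best feature per class (winner-take-all)
    weights = np.zeros(r_precision.shape[1])
    for j, best in enumerate(winners):
        weights[best] += class_sizes[j] / n_total  # f_i(C_j) * n_j / N
    return weights
```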

3 Efficient Categorization of 3D Models Based on Clustering Result

As an unsupervised technique, cluster analysis is a promising candidate for categorizing 3D models. It is good at discovering the concentration structure of the feature vectors without prior knowledge. Since models with similar features are grouped together and their features reflect their shape, the clustering result can be considered a categorization of 3D models based on their shape. Ref. [10] performs research in this field; however, it relies on just one kind of shape feature, so the clustering result is highly sensitive to the performance of that shape feature. In contrast, the paper adopts the proposed fixed-weight feature combination method and achieves a much better and more stable shape descriptor of a 3D model. The distance among models is computed according to Formulas (1) and (2).

The X-means algorithm is selected to analyze the shape feature set of 3D models. X-means is an important improvement of the well-known K-means method. To overcome the high dependency of K-means on the pre-decided number k of clusters, X-means requires but does not restrict itself to the parameter k. Its basic idea is that, in an iteration of clustering, it splits the center of selected clusters into two children and decides whether a child survives. During clustering, the formula $BIC(c \mid x) = L(x \mid c) - \frac{k(m+1)}{2}\log n$ is used to decide the appropriate moment to stop clustering, where $L(x \mid c)$ is the log-likelihood of dataset x according to the model c, and m is the dimensionality of the data. In this way, the appropriate number of clusters can be decided automatically.

Although X-means is efficient in classifying models according to their shape, there still exist mistakes in the clustering result, for two reasons:

(1) Due to the complexity and diversity of models' shapes, it is very difficult to describe all shapes. The combination of different shape features can partially solve this problem, but still has its limits. (2) X-means may make clustering mistakes. The clustering process ensures that most data are clustered into the right groups, but not every data item. Thus, we introduce human correction to fix the mistakes in the clustering result. To avoid mistakes caused by manual intervention, like those in Table 1, we make the restriction that a user can only delete some models from a cluster or delete a whole cluster. The pruned models are considered wrongly categorized and are labeled as "unclassified". Finally, the refined clustering result is treated as the categorization of the 3D models.


In comparison with pure manual work, the categorization based on the clustering result is much more efficient and objective. The clustering technique not only shows the number of classes according to the models' shape, but also states the members of each class.

4 Experiment and Analysis

We conduct a series of experiments on the standard 3D model database, the Princeton Shape Benchmark, which contains 1814 models. The 5 sets of shape feature vectors introduced in Section 2 are used for combination. First, we analyze the performance of the automatic fixed-weight combination. According to Formula (2), the automatic fixed weights for the 5 features are DBD=0.288, NEDT=0.373, RSH-136=0.208, RSH-32=0.044 and SHVF=0.087. Table 3 states the R-precision and the improvement after combining any two features for PSB. Compared with Table 2, the performance of the automatic fixed-weight combination is much better. The highest improvement is 24% = (0.208-0.168)/0.168, while the best combination improves by 9.6% = (0.388-0.354)/0.354.

Table 3. The performance of combining two features based on the automatic fixed weights (entries are R-Precision / Improvement)

            DBD           NEDT          RSH-136       RSH-32       SHVF
+DBD        0.354/--      --            --            --           --
+NEDT       0.388/+9.6%   0.346/--      --            --           --
+RSH-136    0.368/+4.8%   0.378/+9.3%   0.292/--      --           --
+RSH-32     0.360/+1.7%   0.353/+2.0%   0.298/+2.1%   0.168/--     --
+SHVF       0.356/+0.6%   0.350/+1.2%   0.302/+3.4%   0.208/+24%   0.201/--

Fig. 1 shows the Precision-Recall curves along with the R-precision of the 5 features, of the combination of the 5 features based on equal fixed weights (Equal Weight), of the combination using the fixed weights (4:4:2:1:1) decided by experience (Experience Weight), and of the combination adopting the proposed automatic fixed weights (Automatic Weight). It can be seen that the proposed method is the best under all criteria. It achieves the best R-precision of 0.4046, which is much better than that of Equal Weight (0.3486) and also slightly better than Experience Weight (0.4021). Its performance improves by 14.5% over the best single feature, DBD. After combining the 5 features based on the proposed method, we adopt X-means to analyze PSB, and 130 clusters are finally obtained. In scanning these clusters, we found that most clusters are formed by models with similar shape, like the clusters C70, C110, C112 and C113 in Table 4. However, there also exist mistakes, such as C43 in Table 4. After analyzing the combined features of those wrong models, we find that the mistakes are mainly caused by the shape feature, not by the clustering.


Fig. 1. Performance comparison adopting Precision-Recall and R-Precision

Table 4. Detail of some result clusters of PSB

[Table 4 shows thumbnail images of the member models of clusters C70, C110, C112, C113, and C43]

Then, we selected 3 students who had never seen these models to refine the clustering result. At least two of them had to reach an agreement for each deletion. In less than 2 hours, including the time spent on arguments, they labeled 202 of the 1814 models (11.13%) as unclassified, and 6 of the 130 clusters (4.6%) were pruned. Obviously, the clustering result is a valuable reference for categorizing 3D models. Even if the refinement time is included, the categorization based on the clustering result is much faster than pure manual work, which usually costs days and is exhausting.

5 Conclusions

The paper proposes a combination method which automatically decides the fixed weights of different shape features. Based on the combined feature, the paper categorizes 3D models according to their shape. Experimental results show that the proposed method performs well not only in improving retrieval performance but also in categorization. Future work will concentrate on the study of clustering ensembles to achieve a more stable clustering result for 3D models.

Acknowledgements. This work is sponsored by the Foundation for the Doctoral Program of the Chinese Ministry of Education under Grant No. 20060183041 and the Natural Science Research Foundation of Harbin Engineering University under grant number HEUFT05007.

References
[1] Funkhouser, T., et al.: A Search Engine for 3D Models. ACM Transactions on Graphics, 22(1), (2003) 85-105.
[2] Yubin Yang, Hui Li, Qing Zhu: Content-Based 3D Model Retrieval: A Survey. Chinese Journal of Computers, Vol. 27, No. 10, (2004) 1298-1310.
[3] Chenyang Cui, Jiaoying Shi: Analysis of Feature Extraction in 3D Model Retrieval. Journal of Computer-Aided Design & Computer Graphics, Vol. 16, No. 7, July (2004) 882-889.
[4] Shilane, P., Min, P., Kazhdan, M., Funkhouser, T.: The Princeton Shape Benchmark. In Proceedings of Shape Modeling International 2004 (SMI'04), Genova, Italy, June 2004, 388-399.
[5] Feature Combination and Relevance Feedback for 3D Model Retrieval. The 11th International Conference on Multi Media Modeling (MMM 2005), 12-14 January 2005, Melbourne, Australia. IEEE Computer Society (2005) 334-339.
[6] Ryutarou Ohbuchi, Yushin Hata: Combining Multiresolution Shape Descriptors for Effective 3D Similarity Search. Proc. WSCG 2006, Plzen, Czech Republic, (2006).
[7] Atmosukarto, I., Wee Kheng Leow, Zhiyong Huang: Feature Combination and Relevance Feedback for 3D Model Retrieval. Proceedings of the 11th International Multimedia Modelling Conference, (2005).
[8] Baeza-Yates, R., Ribeiro-Neto, B.: Modern Information Retrieval. Addison-Wesley, (1999).
[9] Dan Pelleg, Andrew Moore: X-means: Extending K-means with Efficient Estimation of the Number of Clusters. In Proc. 2000 Int. Conf. on Data Mining, (2000).
[10] Tianyang Lv, et al.: An Auto-Stopped Hierarchical Clustering Algorithm for Analyzing 3D Model Database. The 9th European Conference on Principles and Practice of Knowledge Discovery in Databases. In: Lecture Notes in Artificial Intelligence, Vol. 3801, 601-608.
[11] Heczko, M., Keim, D., Saupe, D., Vranic, D.: Methods for similarity search on 3D databases. Datenbank-Spektrum, 2(2):54-63, (2002). In German.
[12] Blum, H.: A transformation for extracting new descriptors of shape. In W. Wathen-Dunn, editor, Proc. Models for the Perception of Speech and Visual Form, pages 362-380, Cambridge, MA, November 1967. MIT Press.
[13] Kazhdan, Michael, Funkhouser, Thomas: Harmonic 3D shape matching. In: Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH Technical Sketch, San Antonio, Texas, (2002)

A Generalised-Mutual-Information-Based Oracle for Hierarchical Radiosity Jaume Rigau, Miquel Feixas, and Mateu Sbert Institut d’Inform` atica i Aplicacions Campus Montilivi P-IV, 17071-Girona, Spain jaume.rigau|miquel.feixas|[email protected]

Abstract. One of the main problems in the radiosity method is how to discretise a scene into mesh elements that allow us to accurately represent illumination. In this paper we present a new refinement criterion for hierarchical radiosity based on the continuous and discrete generalised mutual information measures between two patches or elements. These measures, derived from the generalised entropy of Havrda-Charvát-Tsallis, express the information transfer within a scene. The results obtained improve on the ones based on kernel smoothness and Shannon mutual information.

1 Introduction

The radiosity method solves the problem of illumination in an environment with diffuse surfaces by using a finite element approach [1]. The scene discretisation has to represent the illumination accurately while trying to avoid unnecessary subdivisions that would increase the computation time. A good meshing strategy will balance the requirements of accuracy and computational cost. In the hierarchical radiosity algorithms [2] the mesh is generated adaptively: when the constant radiosity assumption on a patch is not valid for the radiosity received from another patch, the refinement algorithm will subdivide it into a set of subpatches or elements. A refinement criterion, called oracle, informs us if a subdivision of the surfaces is needed, bearing in mind that the cost of the oracle should remain acceptable. In [3,4], the difficulty in obtaining a precise solution for the scene radiosity has been related to the degree of dependence between all the elements of the adaptive mesh. This dependence has been quantified by the mutual information, which is a measure of the information transfer in a scene. In this paper, a new oracle based on the generalised mutual information [5], derived from the generalised entropy of Havrda-Charvát-Tsallis [6], is introduced. This oracle is obtained from the difference between the continuous and discrete generalised mutual information between two elements of the adaptive mesh and expresses the loss of information transfer between two patches due to the discretisation. The results obtained show that this oracle improves on the kernel smoothness-based [7] and the mutual information-based [8,9] ones, confirming the usefulness of the information-theoretic approach in dealing with the radiosity problem.

2 Preliminaries

2.1 Radiosity

The radiosity method uses a finite element approach, discretising the diffuse environment into patches and considering the radiosities, emissivities and reflectances constant over them. With these assumptions, the discrete radiosity equation [1] is given by

$B_i = E_i + \rho_i \sum_{j \in S} F_{ij} B_j$,   (1)

where S is the set of patches of the scene, $B_i$, $E_i$, and $\rho_i$ are, respectively, the radiosity, emissivity, and reflectance of patch i, $B_j$ is the radiosity of patch j, and $F_{ij}$ is the patch-to-patch form factor, defined by

$F_{ij} = \frac{1}{A_i} \int_{S_i} \int_{S_j} F(x, y)\, dA_y\, dA_x$,   (2)

where $A_i$ is the area of patch i, $S_i$ and $S_j$ are, respectively, the surfaces of patches i and j, $F(x, y)$ is the point-to-point form factor between $x \in S_i$ and $y \in S_j$, and $dA_x$ and $dA_y$ are, respectively, the differential areas at points x and y. Using Monte Carlo computation with area-to-area sampling, $F_{ij}$ can be calculated as

$F_{ij} \approx A_j \frac{1}{|S_{i\times j}|} \sum_{(x,y) \in S_{i\times j}} F(x, y)$,   (3)

where the computation accuracy depends on the number of random segments between i and j ($|S_{i\times j}|$). To solve the system (1), a hierarchical refinement algorithm is used. The efficiency of this algorithm depends on the choice of a good refinement criterion. Many refinement oracles have been proposed in the literature (see [10] for details). For comparison purposes, we review here the oracle based on kernel smoothness (KS), proposed by Gortler et al. [7] in order to drive hierarchical refinement with higher-order approximations. When applied to constant approximations, this refinement criterion is given by

$\rho_i \max\{F_{ij}^{max} - F_{ij}^{avg},\, F_{ij}^{avg} - F_{ij}^{min}\}\, A_j B_j < \epsilon$,   (4)

where $F_{ij}^{max} = \max\{F(x, y) \mid x \in S_i, y \in S_j\}$ and $F_{ij}^{min} = \min\{F(x, y) \mid x \in S_i, y \in S_j\}$ are, respectively, the maximum and minimum radiosity kernel values estimated by taking the maximum and minimum value computed between pairs of random points on both elements, and $F_{ij}^{avg} = F_{ij}/A_j$ is the average radiosity kernel value.
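To make the area-to-area sampling of (3) and the KS test of (4) concrete, here is a minimal C++ sketch (not taken from the paper); the point-to-point kernel samples F(x, y), the patch quantities and the threshold epsilon are assumed to be supplied by the caller.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One random point pair (x in S_i, y in S_j) with its point-to-point
// form factor F(x, y), assumed to be precomputed by the caller.
struct KernelSample {
    double F;  // F(x, y)
};

// Equation (3): F_ij ~= A_j * (1/|S_ixj|) * sum F(x, y).
double formFactorEstimate(const std::vector<KernelSample>& samples, double Aj) {
    double sum = 0.0;
    for (const KernelSample& s : samples) sum += s.F;
    return Aj * sum / static_cast<double>(samples.size());
}

// Equation (4): kernel-smoothness (KS) refinement test for constant
// approximations: rho_i * max{Fmax - Favg, Favg - Fmin} * A_j * B_j < eps.
bool ksOracleNeedsRefinement(const std::vector<KernelSample>& samples,
                             double rho_i, double Aj, double Bj, double eps) {
    double Fmax = 0.0, Fmin = 1e300, sum = 0.0;
    for (const KernelSample& s : samples) {
        Fmax = std::max(Fmax, s.F);
        Fmin = std::min(Fmin, s.F);
        sum += s.F;
    }
    double Favg = sum / static_cast<double>(samples.size());  // = F_ij / A_j
    double error = rho_i * std::max(Fmax - Favg, Favg - Fmin) * Aj * Bj;
    return error >= eps;  // refine when the estimated error is not below eps
}
```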

2.2 HCT Entropy and Generalised Mutual Information

In 1967, Havrda and Charvát [6] introduced a new generalised definition of entropy. In 1988, Tsallis [11] used this entropy in order to generalise the Boltzmann-Gibbs entropy in statistical mechanics.


Definition 1. The Havrda-Charvát-Tsallis entropy (HCT entropy) of a discrete random variable X, with |X| = n and $p_X$ as its probability distribution, is defined by

$H_\alpha(X) = k\,\frac{1 - \sum_{i=1}^{n} p_i^\alpha}{\alpha - 1}$,   (5)

where k is a positive constant (by default k = 1) and $\alpha \in \mathbb{R}\setminus\{1\}$ is called the entropic index.

This entropy recovers the Shannon discrete entropy when $\alpha \to 1$, $H_1(X) \equiv -\sum_{i=1}^{n} p_i \ln p_i$, and fulfils good properties such as non-negativity and concavity. On the other hand, Taneja [5] and Tsallis [12] introduced the generalised mutual information.

Definition 2. The generalised mutual information between two discrete random variables (X, Y) is defined by

$I_\alpha(X, Y) = \frac{1}{1-\alpha}\left(1 - \sum_{i=1}^{n}\sum_{j=1}^{m} \frac{p_{ij}^\alpha}{p_i^{\alpha-1} q_j^{\alpha-1}}\right)$,   (6)

where |X| = n, |Y| = m, $p_X$ and $q_Y$ are the marginal probability distributions, and $p_{XY}$ is the joint probability distribution between X and Y.

The transition of $I_\alpha(X, Y)$ to the continuous generalised mutual information is straightforward. Using entropies, an alternative form is given by

$I_\alpha(X, Y) = H_\alpha(X) + H_\alpha(Y) - (1-\alpha) H_\alpha(X) H_\alpha(Y) - H_\alpha(X, Y)$.   (7)

Shannon mutual information (MI) is obtained when $\alpha \to 1$. Some alternative ways for the generalised mutual information can be seen in [13].
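The discrete measures (5) and (6) translate directly into code. The following is a small illustrative C++ sketch, assuming strictly positive probability entries and an entropic index different from 1; the function names are our own.

```cpp
#include <cmath>
#include <vector>

// Equation (5): HCT entropy H_alpha(X) = k * (1 - sum p_i^alpha) / (alpha - 1),
// with k = 1. The entropic index alpha must differ from 1.
double hctEntropy(const std::vector<double>& p, double alpha) {
    double sum = 0.0;
    for (double pi : p) sum += std::pow(pi, alpha);
    return (1.0 - sum) / (alpha - 1.0);
}

// Equation (6): generalised mutual information between X and Y, given the
// joint distribution joint[i][j] and the marginals p_i, q_j (all positive).
double generalisedMutualInformation(const std::vector<std::vector<double>>& joint,
                                    const std::vector<double>& p,
                                    const std::vector<double>& q,
                                    double alpha) {
    double sum = 0.0;
    for (size_t i = 0; i < p.size(); ++i)
        for (size_t j = 0; j < q.size(); ++j)
            sum += std::pow(joint[i][j], alpha) /
                   (std::pow(p[i], alpha - 1.0) * std::pow(q[j], alpha - 1.0));
    return (1.0 - sum) / (1.0 - alpha);
}
```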

3 Generalised Mutual Information-Based Oracle

We will see below how the generalised mutual information can be used to build a refinement oracle within a hierarchical radiosity algorithm. Our strategy will be based on the estimate of the discretisation error from the difference between the continuous and discrete generalised mutual information (6) between two elements of the adaptive mesh. The discretisation error based on Shannon mutual information was introduced by Feixas et al. [8] and applied to hierarchical radiosity with good results. In the context of a discrete scene information channel [4], the marginal probabilities are given by $p_X = q_Y = \{a_i\}$ (i.e., the distribution of the relative areas of patches, $a_i = A_i/A_T$, where $A_T$ is the total area of the scene) and the joint probability is given by $p_{XY} = \{a_i F_{ij}\}$. Then,

Definition 3. The discrete generalised mutual information of a scene is given by

$I_\alpha = \frac{1}{1-\alpha}\left(1 - \sum_{i \in S}\sum_{j \in S} \frac{a_i^\alpha F_{ij}^\alpha}{a_i^{\alpha-1} a_j^{\alpha-1}}\right) = \sum_{i \in S}\sum_{j \in S} \tau_\alpha(a_i F_{ij}, a_i a_j)$,   (8)

where the last equality is obtained using $1 = \sum_{i \in S}\sum_{j \in S} a_i a_j$ and $\tau_\alpha(p, q) = \frac{1}{1-\alpha}\,\frac{q^\alpha - p^\alpha}{q^{\alpha-1}}$.

This measure quantifies the discrete information transfer in a discretised scene. The term $\tau_\alpha(a_i F_{ij}, a_i a_j)$ can be considered as an element of the generalised mutual information matrix $I_\alpha$, representing the information transfer between patches i and j. To compute $I_\alpha$, the Monte Carlo area-to-area sampling (3) is used, obtaining for each pair of elements

$I_{\alpha_{ij}} = \tau_\alpha(a_i F_{ij}, a_i a_j) = \frac{1}{1-\alpha}\,\frac{a_i^\alpha a_j^\alpha - a_i^\alpha F_{ij}^\alpha}{a_i^{\alpha-1} a_j^{\alpha-1}} \approx \frac{1}{1-\alpha}\,\frac{A_i A_j}{A_T A_T}\left(1 - A_T^\alpha \left(\frac{1}{|S_{i\times j}|}\sum_{(x,y)\in S_{i\times j}} F(x, y)\right)^{\!\alpha}\right)$.   (9)

The information transfer between two patches can be obtained more accurately using the continuous generalised mutual information between them. From the discrete form (8) and using the pdfs $p(x) = q(y) = \frac{1}{A_T}$ and $p(x, y) = \frac{1}{A_T} F(x, y)$, we define

Definition 4. The continuous generalised mutual information of a scene is given by

$I_\alpha^c = \int_S \int_S \tau_\alpha\!\left(\frac{1}{A_T} F(x, y), \frac{1}{A_T^2}\right) dA_y\, dA_x$.   (10)

This represents the continuous information transfer in a scene. We can split the integration domain, and for two surface elements i and j we have

$I_{\alpha_{ij}}^c = \int_{S_i} \int_{S_j} \tau_\alpha\!\left(\frac{1}{A_T} F(x, y), \frac{1}{A_T^2}\right) dA_y\, dA_x$,   (11)

which, analogously to the discrete case, expresses the information transfer between two patches. Both continuous expressions, (10) and (11), can be solved by Monte Carlo integration. Taking again area-to-area sampling (i.e., pdf $\frac{1}{A_i A_j}$), the last expression (11) can be approximated by

$I_{\alpha_{ij}}^c \approx A_i A_j \frac{1}{|S_{i\times j}|} \sum_{(x,y)\in S_{i\times j}} \tau_\alpha\!\left(\frac{1}{A_T} F(x, y), \frac{1}{A_T^2}\right) = \frac{1}{1-\alpha}\,\frac{A_i A_j}{A_T A_T}\left(1 - A_T^\alpha \frac{1}{|S_{i\times j}|}\sum_{(x,y)\in S_{i\times j}} F(x, y)^\alpha\right)$.   (12)

Now, we define

Definition 5. The generalised discretisation error of a scene is given by

$\Delta_\alpha = I_\alpha^c - I_\alpha = \sum_{i \in S}\sum_{j \in S} \Delta_{\alpha_{ij}}$,   (13)

where $\Delta_{\alpha_{ij}} = I_{\alpha_{ij}}^c - I_{\alpha_{ij}}$.

While $\Delta_\alpha$ expresses the loss of information transfer in a scene due to the discretisation, the term $\Delta_{\alpha_{ij}}$ gives us this loss between two elements i and j. This difference is interpreted as the benefit to be gained by refining and can be used as the basis of the new oracle. From (13), using (9) and (12), we obtain

$\Delta_{\alpha_{ij}} \approx A_i A_j A_T^{\alpha-2}\,\frac{1}{1-\alpha}\,\delta_{\alpha_{ij}}$,   (14)

where

$\delta_{\alpha_{ij}} = \left(\frac{1}{|S_{i\times j}|}\sum_{(x,y)\in S_{i\times j}} F(x, y)\right)^{\!\alpha} - \frac{1}{|S_{i\times j}|}\sum_{(x,y)\in S_{i\times j}} F(x, y)^\alpha$.   (15)

According to the radiosity equation (1) and in analogy to classic oracles, like KS, we consider the oracle structure $\rho_i \sigma B_j < \epsilon$, where $\sigma$ is the geometric kernel [14]. Now, we propose to take the generalised discretisation error between two patches as the kernel ($\sigma = \Delta_{\alpha_{ij}}$) for the new oracle based on generalised mutual information (GMI$_\alpha$). To simplify the expression of this oracle, we multiply the inequality by the scene constant $A_T^{2-\alpha}(1-\alpha)$.

Definition 6. The hierarchical radiosity oracle based on the generalised mutual information is defined by

$\rho_i A_i A_j \delta_{\alpha_{ij}} B_j < \epsilon$.   (16)
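As an illustration of how the oracle of Definition 6 could be evaluated inside a refinement loop, here is a hedged C++ sketch (not the authors' code); the form-factor samples F(x, y) between the two elements and the threshold epsilon are assumed inputs supplied by the surrounding hierarchical radiosity algorithm.

```cpp
#include <cmath>
#include <vector>

// Equation (15): delta_alpha_ij = (mean F)^alpha - mean(F^alpha), computed
// over the |S_ixj| random point-to-point form factor samples F(x, y).
double deltaAlpha(const std::vector<double>& F, double alpha) {
    double meanF = 0.0, meanFalpha = 0.0;
    for (double f : F) {
        meanF += f;
        meanFalpha += std::pow(f, alpha);
    }
    meanF /= static_cast<double>(F.size());
    meanFalpha /= static_cast<double>(F.size());
    return std::pow(meanF, alpha) - meanFalpha;
}

// Equation (16), taken literally: refine when
// rho_i * A_i * A_j * delta_alpha_ij * B_j is not below eps.
bool gmiOracleNeedsRefinement(const std::vector<double>& F, double alpha,
                              double rho_i, double Ai, double Aj, double Bj,
                              double eps) {
    double error = rho_i * Ai * Aj * deltaAlpha(F, alpha) * Bj;
    return error >= eps;
}
```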

4 Results

In this section, the GMIα oracle is compared with the KS and MI ones. Other comparisons, with a more extended analysis, can be found in [14]. All oracles have been implemented on top of the hierarchical Monte Carlo radiosity method. In Fig. 1 we show the results obtained for the KS (a) and GMIα oracles with their Gouraud shaded solutions and meshes. In the GMIα case, we show the results obtained with the entropic indexes 1 (b) (i.e., note that GMI1 = MI) and 0.5 (c). For the sake of comparison, adaptive meshes of identical size have been generated with the same cost for the power distribution: around 19,000 patches and 2,684,000 rays, respectively. To estimate the form factor, the number of random lines has been fixed to 10. In Table 1, we show the Root Mean Square Error (RMSE) and Peak Signal Noise Ratio (PSNR) measures for the KS and GMIα (for 5 different entropic indexes) oracles for the test scene. These measures have been computed with respect to the corresponding converged image, obtained with a path-tracing algorithm with 1,024 samples per pixel in a stratified way.



Fig. 1. (a) KS and GMIα (entropic indexes (b) 1 and (c) 0.5) oracles. By columns, (i) Gouraud shaded solution of view1 and (ii) mesh of view2 are shown.

For each measure, we consider a uniform weight for every colour channel (RMSEa and PSNRa) and a perceptual one (RMSEp and PSNRp) in accordance with the sRGB system. Observe in view1, obtained with GMIα (Fig. 1.i.b-c), the finer details of the shadow cast on the wall by the chair on the right-hand side, and also the better-defined shadow on the chair on the left-hand side and the one cast by the desk. In view2 (Fig. 1.ii) we can also see how our oracle outperforms the KS one, especially in the much better defined shadow of the chair on the right. Note the superior quality of the mesh created by our oracle.


Table 1. The RMSE and PSNR measures of the KS and GMIα oracles applied to the test scene of Fig. 1, where the KS and GMIα∈{0.5,1} results are shown. The oracles have been evaluated with 10 random lines between each two elements.

                      view1                                  view2
oracle     RMSEa    RMSEp    PSNRa    PSNRp      RMSEa    RMSEp    PSNRa    PSNRp
KS         13.791   13.128   25.339   25.767     15.167   14.354   24.513   24.991
GMI1.50    11.889   11.280   26.628   27.084     13.046   12.473   25.821   26.211
GMI1.25    10.872   10.173   27.405   27.982     11.903   11.279   26.618   27.086
GMI1.00     9.998    9.232   28.133   28.825     10.438    9.709   27.758   28.387
GMI0.75     9.555    8.786   28.526   29.254     10.010    9.257   28.122   28.801
GMI0.50     9.370    8.568   28.696   29.473      9.548    8.740   28.533   29.300

Fig. 2. GMI0.50 oracle: (i) Gouraud shaded solution and (ii) mesh are shown

Table 2. The RMSE and PSNR measures of the GMIα oracle applied to the scene of Fig. 2, where the GMI0.5 result is shown. The oracle has been evaluated with 10 random lines between each two elements.

oracle     RMSEa    RMSEp    PSNRa    PSNRp
GMI1.50    16.529   15.530   23.766   24.307
GMI1.25    15.199   14.145   24.494   25.119
GMI1.00    14.958   13.844   24.633   25.306
GMI0.75    14.802   13.683   24.724   25.407
GMI0.50    14.679   13.573   24.797   25.477

In general, the improvement obtained with the GMIα oracle is significant. Moreover, its behaviour denotes a tendency to improve towards subextensive entropic indexes (α < 1). To observe this tendency, another test scene is shown in Fig. 2 for an entropic index 0.5. Its corresponding RMSE and PSNR measures are presented in Table 2. The meshes are made up of 10,000 patches with 9,268,000 rays to distribute the power and we have kept 10 random lines to evaluate the oracle between elements.

5 Conclusions

We have presented a new generalised-mutual-information-based oracle for hierarchical radiosity, calculated from the difference between the continuous and discrete generalised mutual information between two elements of the adaptive mesh. This measure expresses the loss of information transfer between two patches due to the discretisation. The objective of the new oracle is to reduce the loss of information, obtaining an optimum mesh. The results achieved improve on the classic methods significantly, being better even than the version based on the Shannon mutual information. In all the tests performed, the best behaviour is obtained with subextensive indexes. Acknowledgments. This report has been funded in part with grant numbers: IST-2-004363 of the European Community - Commission of the European Communities, and TIN2004-07451-C03-01 and HH2004-001 of the Ministry of Education and Science (Spanish Government).

References

1. Goral, C.M., Torrance, K.E., Greenberg, D.P., Battaile, B.: Modelling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH '84) 18(3) (July 1984) 213–222
2. Hanrahan, P., Salzman, D., Aupperle, L.: A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH '91) 25(4) (July 1991) 197–206
3. Feixas, M., del Acebo, E., Bekaert, P., Sbert, M.: An information theory framework for the analysis of scene complexity. Computer Graphics Forum (Proceedings of Eurographics '99) 18(3) (September 1999) 95–106
4. Feixas, M.: An Information-Theory Framework for the Study of the Complexity of Visibility and Radiosity in a Scene. PhD thesis, Universitat Politècnica de Catalunya, Barcelona, Spain (December 2002)
5. Taneja, I.J.: Bivariate measures of type α and their applications. Tamkang Journal of Mathematics 19(3) (1988) 63–74
6. Havrda, J., Charvát, F.: Quantification method of classification processes. Concept of structural α-entropy. Kybernetika (1967) 30–35
7. Gortler, S.J., Schröder, P., Cohen, M.F., Hanrahan, P.: Wavelet radiosity. In Kajiya, J.T., ed.: Computer Graphics (Proceedings of SIGGRAPH '93). Volume 27 of Annual Conference Series. (August 1993) 221–230
8. Feixas, M., Rigau, J., Bekaert, P., Sbert, M.: Information-theoretic oracle based on kernel smoothness for hierarchical radiosity. In: Short Presentations (Eurographics '02). (September 2002) 325–333
9. Rigau, J., Feixas, M., Sbert, M.: Information-theory-based oracles for hierarchical radiosity. In Kumar, V., Gavrilova, M.L., Tan, C., L'Ecuyer, P., eds.: Computational Science and Its Applications - ICCSA 2003. Number 2669-3 in Lecture Notes in Computer Science. Springer-Verlag (May 2003) 275–284
10. Bekaert, P.: Hierarchical and Stochastic Algorithms for Radiosity. PhD thesis, Katholieke Universiteit Leuven, Leuven, Belgium (December 1999)


11. Tsallis, C.: Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics 52(1/2) (1988) 479–487
12. Tsallis, C.: Generalized entropy-based criterion for consistent testing. Physical Review E 58 (1998) 1442–1445
13. Taneja, I.J.: On generalized information measures and their applications. In: Advances in Electronics and Electron Physics. Volume 76. Academic Press Ltd. (1989) 327–413
14. Rigau, J.: Information-Theoretic Refinement Criteria for Image Synthesis. PhD thesis, Universitat Politècnica de Catalunya, Barcelona, Spain (November 2006)

Rendering Technique for Colored Paper Mosaic Youngsup Park, Sanghyun Seo, YongJae Gi, Hanna Song, and Kyunghyun Yoon CG Lab., CS&E, ChungAng University, 221, HeokSuk-dong, DongJak-gu, Seoul, Korea {cookie,shseo,yj1023,comely1004,khyoon}@cglab.cse.cau.ac.kr http://cglab.cse.cau.ac.kr

Abstract. The work presented in this paper shows a way to generate colored paper mosaics using computer graphics techniques. Two tasks need to be done to generate a colored paper mosaic: the first is to generate the colored paper tiles, and the other is to arrange them. Voronoi Diagram and Random Point Displacement are used in this paper to produce the shape of each tile, and an energy value that a tile has depending on its location is the factor that determines its best position. This paper focuses on representing the overlap among tiles, the maintenance of the edges of the input image, and various tile shapes in the final output image by solving the two tasks mentioned above. Keywords: Colored paper mosaic, Tile generation and Tile arrangement.

1 Introduction

Mosaic is an artwork formed by lots of small pieces called tiles. It can be expressed in many different ways depending on the type and the position of the tiles. Photomosaics[1] builds a big image from small square image tiles that are laid out on a grid pattern; a distinctive output is obtained by combining multiple images into one image. While Photomosaics arranges tiles in a grid pattern, Simulated Decorative Mosaic[2] arranges tiles in the direction of the edges of the input image, showing a pattern similar to those found in the ancient Byzantine period. This pattern can also be found in Jigsaw Image Mosaics[3]; the only difference is the use of variously shaped image tiles instead of single-colored square tiles. In this paper, we show how colored paper mosaic, among the various styles of mosaic artworks, can be represented using computer graphics techniques. To generate a colored paper mosaic, the following two issues need to be addressed: the first is to decide the shape of each colored paper tile, and the second is to arrange the colored paper tiles. A Voronoi Diagram[9] and Random Fractal techniques are used in this paper to produce the shape of the colored paper tiles. The problem with using the Voronoi Diagram alone is that it makes the form of the tiles too plain, since it generates only convex polygons. Therefore, the method presented in this paper uses predefined data of colored paper as a database, like Photomosaics.


Then, small pieces of colored paper tiles are created by repeatedly clipping Voronoi polygons from the colored paper data. Many different tile shapes, including concave polygons, can be expressed since a tile results from repeated tearing of one sheet of colored paper. The energy value that a colored paper tile has depending on its location is calculated to find the best position of the tile; the location with the biggest sum of energy values is defined as the best position. Tiles are placed at the point where the sum of energy values is the biggest, by being moved and rotated toward the nearest edge.

1.1 Related Work

Existing mosaic studies focus on the selection, the generation, and the arrangement of tiles. We compare the existing studies by classifying them into two groups. The studies in the first group focus on the selection and the arrangement of tiles, since they use fixed or predefined tile shapes. Photomosaics[1] creates an image formed by various small image tiles; it is an algorithm that lays out images selected from a database in a grid pattern. It proposes an effective method of tile selection from the database, but it is hard to keep the edges of the image since all the tiles in Photomosaics are square. In the study of Simulated Decorative Mosaic[2], Hausner reproduces the pattern and techniques used in the Byzantine era by positioning single-colored square tiles in the direction of the edges of the input image; the methods of Centroidal Voronoi Diagram (CVD) and Edge Avoidance are used to arrange tiles densely. Jigsaw Image Mosaics (JIM)[3] extends this technique by using arbitrarily shaped image tiles instead of single-colored square tiles, and solves the tile arrangement with an Energy Minimization Framework. The studies in the second group propose methods only for the generation of tiles. Park[5] proposes a passive colored paper mosaic generation technique in which the shape and arrangement of tiles are entirely specified by the user's input; the proposed method uses a Random Fractal technique for generating torn-shaped colored paper tiles, but it leaves the user too much work to do. To solve this problem of the passive technique, an automatic colored paper mosaic[6] using the Voronoi Diagram was proposed: the majority of the work is done by the computer, and the only thing the user needs to do is to input a few parameters. It reduces the heavy load of work on the user's side; however, it cannot maintain the edges of the image since it arranges tiles without considering them. In order to solve this problem, another technique[7] was suggested that arranges Voronoi sites using a Quad-Tree and clips a tile according to the edges of the image once it crosses an edge. Even though this technique can keep the edges of images, it cannot express the real texture and shape of colored paper tearing, since the polygons created using the Voronoi Diagram are convex and do not overlap. Therefore, the existing studies do not show various tile shapes or the overlap among tiles.

2 Preprocessing

2.1 Data Structure of Colored Paper

The data structure of colored paper is organized in two layers that contain information such as the texture image and the vertices, as shown in Figure 1. The upper layer is the visible part of the colored paper that carries the color value, and the lower layer is the white paper exposed on the torn portion. Defining the data structure of colored paper in advance has two advantages. The first is that various shapes of colored paper tiles, such as concave polygons in addition to convex ones, can be expressed, because the previously used colored paper is stored in a buffer and polygon clipping with the Voronoi Diagram is repeated as necessary. The other is that a different type of paper mosaic can easily be produced by modifying the data structure: if images of magazines, newspapers and so on are used instead of colored paper, it is possible to produce a paper mosaic like a collage.

Fig. 1. The data structure of colored paper object
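A minimal sketch of such a two-layer colored paper object is given below; the field names and types are illustrative assumptions, not the authors' implementation.

```cpp
#include <string>
#include <vector>

struct Vertex2D { float x, y; };

// One layer of a colored paper object: a texture image and the polygon
// outline (vertices) that shrinks as pieces are torn off.
struct PaperLayer {
    std::string textureImage;        // path or id of the layer's texture
    std::vector<Vertex2D> outline;   // current boundary of the remaining paper
};

// A colored paper object made of two layers: the upper, visible colored
// layer and the lower white layer exposed along torn edges.
struct ColoredPaper {
    PaperLayer coloredLayer;  // upper layer carrying the paper color
    PaperLayer whiteLayer;    // lower layer shown on the torn portion
    float r, g, b;            // single color of the paper
};
```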

2.2 Image Segmentation

First, the necessary image processing operations[11], such as blurring, are performed on the input image, and the image is divided into several regions of similar color in LUV space using the Mean-Shift image segmentation technique[8]. We call each region a container, and the proposed mosaic algorithm is performed per container. However, Mean-Shift segmentation can create very small containers. If mosaic processing were performed at this stage, colored paper tiles would not be attached to these small containers, resulting in lots of grout spaces in the result image, as shown in Figure 4. Therefore, an additional step is needed to integrate these small containers. To give flexibility and to reflect individual intentions of expression, the integration of small containers is controlled by the user's input.

3 The Generation of Colored Paper Tile

3.1 Determination of Size and Color

To determine the size and the color of a tile, the initial position where the tile is attached is determined in advance by a Hill-Climbing algorithm[4]. The Hill-Climbing algorithm keeps changing the position until the function value converges to an optimal point. Since in real life big tiles are normally applied first, starting from the boundary, rather than small ones, the function is defined as in Equation (1) with the following two factors: size and boundary. The size factor is defined by D(x, y), the minimum distance between pixel (x, y) and the boundary, and the boundary factor is defined by D(x, y) - D(i, j), the sum of differences with respect to the neighboring pixels. The position that has the largest value of L(x, y) is taken as the initial position.

$L(x, y) = \sum_{i=x-1}^{x+1} \sum_{j=y-1}^{y+1} \left( D(x, y) - D(i, j) \right) \cdot [\,(i, j) \neq (x, y)\,]$   (1)

The size of the colored paper tile is determined by the distance from the boundary. First, we divide the boundary pixels into two groups: the first group has y values smaller than that of the initial position, and the second group has larger y values. Then, of the two minimum distance values obtained from the two groups, the smaller one is set as the minimum size and the larger one as the maximum size. A colored paper whose color is similar to the one at the initial position is selected. Colored paper is defined as single-colored: a square area of the tile size is built around the initial position, and the average RGB color in that area is selected.
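The following C++ sketch illustrates how L(x, y) of Equation (1) and the hill-climbing search for the initial position could be implemented; the distance map D and the interior-pixel handling are assumptions of this sketch, not the authors' code.

```cpp
#include <vector>

// Distance map D(x, y): minimum distance from pixel (x, y) to the container
// boundary, assumed to be precomputed (e.g., by a distance transform).
struct DistanceMap {
    int width, height;
    std::vector<float> d;
    float at(int x, int y) const { return d[y * width + x]; }
};

// Equation (1): L(x, y) = sum over the 3x3 neighborhood (excluding the
// center) of D(x, y) - D(i, j). Assumes (x, y) is an interior pixel.
float L(const DistanceMap& D, int x, int y) {
    float sum = 0.0f;
    for (int i = x - 1; i <= x + 1; ++i)
        for (int j = y - 1; j <= y + 1; ++j)
            if (!(i == x && j == y))
                sum += D.at(x, y) - D.at(i, j);
    return sum;
}

// Hill climbing: repeatedly move to the best neighboring interior pixel
// until no neighbor improves L; (x, y) ends at the chosen initial position.
void hillClimb(const DistanceMap& D, int& x, int& y) {
    bool improved = true;
    while (improved) {
        improved = false;
        int bestX = x, bestY = y;
        float best = L(D, x, y);
        for (int i = x - 1; i <= x + 1; ++i)
            for (int j = y - 1; j <= y + 1; ++j)
                if (i > 0 && j > 0 && i < D.width - 1 && j < D.height - 1 &&
                    L(D, i, j) > best) {
                    best = L(D, i, j); bestX = i; bestY = j; improved = true;
                }
        x = bestX; y = bestY;
    }
}
```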

3.2 Determination of Shape

There are two steps to determine the shape of a colored paper tile: the first is to determine the overall outline of the tile to be clipped, and the other is to express the torn effect. The Voronoi Diagram is applied to decide the overall outline of the tile. First, the area of the colored paper is divided into several grid cells according to the size of the tile to be torn; then, the Voronoi diagram is created by placing an individual Voronoi site in each cell, as shown in Figure 2(b). The generated Voronoi diagram contains multiple Voronoi polygons, so it must be decided which polygon to clip. Considering the fact that people start tearing from the boundary of the paper in real mosaic work, a polygon located near the boundary is torn first: since there is always a vertex on the boundary of the colored paper, as shown in Figure 2(c), one of the polygons that contain such a vertex is randomly chosen. Once the outline of the tile is determined by the Voronoi polygon, the torn effect is applied to the boundary of the determined outline. This is done by applying Random Point Displacement, one of the Random Fractal techniques, to each layer of the colored paper individually. The Random Point Displacement algorithm is applied to those edges of the selected Voronoi polygon that do not lie on the boundary of the colored paper. The irregularity of the torn surface and the white-colored portion are expressed by continuously perturbing random points of the edge in the vertical direction. Lastly, the Voronoi polygon modified by the Random Point Displacement algorithm is clipped, as shown in Figure 2(d).


Fig. 2. The process of paper tearing: (a) colored paper, (b) Voronoi, (c) torn effect, (d) clipping
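The torn effect can be illustrated with a midpoint-style random displacement along one polygon edge, as in the hedged C++ sketch below; displacing midpoints along the edge normal, the recursion depth and the amplitude decay are choices of this sketch, not necessarily those of the paper.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Pt { float x, y; };

// Recursively subdivide the segment [a, b], displacing each midpoint in the
// direction perpendicular to the edge. The displacement amplitude is halved
// at every level, which yields an irregular, fractal-like torn border.
void displaceEdge(Pt a, Pt b, float amplitude, int depth,
                  std::mt19937& rng, std::vector<Pt>& out) {
    if (depth == 0) {
        out.push_back(b);
        return;
    }
    float dx = b.x - a.x, dy = b.y - a.y;       // edge direction (a != b assumed)
    float len = std::sqrt(dx * dx + dy * dy);
    float nx = -dy / len, ny = dx / len;        // unit normal of the edge

    std::uniform_real_distribution<float> offset(-amplitude, amplitude);
    float d = offset(rng);
    Pt mid{(a.x + b.x) * 0.5f + nx * d, (a.y + b.y) * 0.5f + ny * d};

    displaceEdge(a, mid, amplitude * 0.5f, depth - 1, rng, out);
    displaceEdge(mid, b, amplitude * 0.5f, depth - 1, rng, out);
}

// Roughen one Voronoi-polygon edge into a torn boundary polyline.
std::vector<Pt> tornEdge(Pt a, Pt b, float amplitude, int depth, unsigned seed) {
    std::mt19937 rng(seed);
    std::vector<Pt> pts{a};
    displaceEdge(a, b, amplitude, depth, rng, pts);
    return pts;
}
```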

4 The Arrangement of Colored Paper Tile

There are two things to consider when arranging colored paper tiles. The first one is to maintain the edges of the input image, and the other is to remove empty spaces among tiles or between a tile and the edge of the image. To maintain the edges of the input image, a technique similar to the Energy Minimization Framework of Jigsaw Image Mosaics is used in this paper. An energy function depending on the position of the tile is defined first, and its sum E(x, y) is calculated as in Equation (2):

$E(x, y) = P_i - P_o - P_t$,
$P_i = T_{max}/2 - D(x, y)$   where $(x, y) \in C$ and $(x, y) \notin T$
$P_o = W_o \cdot D(x, y)$   where $(x, y) \notin C$
$P_t = W_t \cdot D(x, y)$   where $(x, y) \in T$   (2)

P_i, P_o and P_t in the expression above correspond to the pixels located inside the container, outside the container, and on the area overlapping other tiles, respectively, and W_o and W_t are weight values depending on the location of each pixel. The bigger the sum E(x, y) is, the better the position maintains the edges of the input image; therefore, the tile is placed where the sum of E(x, y) is the greatest. To remove empty spaces among tiles and between a tile and the edge of the image, the tile is moved and rotated in the direction of the nearest edge. This movement and rotation continue until the sum of E(x, y) in Equation (2) converges or no longer increases.

Fig. 3. Positioning of colored paper tile: (a) the best case, (b) the worst case, (c) less overlapping, (d) edge keeping


Figure 3 shows four different situations of tile arrangement. Figure 3(b) shows the case where the tile is positioned outside the edge of the image; its location needs to be corrected, since it prevents the tile from keeping the edge of the image. In Figure 3(c) two tiles overlap too much, which also needs to be corrected. Figure 3(d) shows the optimal arrangement of the tile. We can control this behavior by adjusting the values of W_o and W_t.
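A possible way to evaluate the arrangement energy (2) for one candidate tile position is sketched below in C++; the per-pixel container/tile masks and the list of pixels covered by the candidate tile are assumed to be maintained elsewhere and are not part of the paper's code.

```cpp
#include <vector>

// Per-pixel state of the canvas, assumed to be maintained while tiles are
// placed: whether a pixel lies inside the current container (C) and whether
// it is already covered by a previously placed tile (T).
struct Canvas {
    int width, height;
    std::vector<unsigned char> insideContainer;  // 1 if pixel is in C
    std::vector<unsigned char> coveredByTile;    // 1 if pixel is in T
    std::vector<float> distToEdge;               // D(x, y)
    int idx(int x, int y) const { return y * width + x; }
};

// Equation (2): E = sum(P_i) - sum(P_o) - sum(P_t) over the pixels covered
// by the candidate tile, with weights Wo, Wt and maximum tile size Tmax.
float arrangementEnergy(const Canvas& c, const std::vector<int>& tilePixels,
                        float Tmax, float Wo, float Wt) {
    float E = 0.0f;
    for (int p : tilePixels) {
        if (c.insideContainer[p] && !c.coveredByTile[p])
            E += Tmax * 0.5f - c.distToEdge[p];   // P_i: inside C, not in T
        else if (!c.insideContainer[p])
            E -= Wo * c.distToEdge[p];            // P_o: outside C
        else
            E -= Wt * c.distToEdge[p];            // P_t: overlapping T
    }
    return E;
}
```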

5 Results

Figures 4, 5 and 6 show result images rendered with the tile size of the source image set between 4 and 100. The result in Figure 4 is the colored paper mosaic produced by applying only the segmentation algorithm to the source image. Grout spaces appear wherever a segmented region is smaller than size 4, since the minimum tile size is set to 4.

Fig. 4. The examples that have lots of grout spaces

Fig. 5. The result of colored paper mosaic


Fig. 6. The result of colored paper mosaic with height map

These smaller containers have to be integrated into a neighboring container in order to remove the grout spaces. The result of the colored paper mosaic including the container integration step is shown in Figure 5; the grout spaces visible in Figure 4(a) have disappeared. Also, many small segments are removed by the integration, so the number of very small tiles is reduced. In addition, a texture effect can be applied to the result image by using texture mapping, a height map[10], and alpha blending, as shown in Figure 6. These effects make the mosaic image more realistic.

6 Discussion and Future Work

The work presented in this paper shows a new method to generate colored paper tiles with computer graphics techniques. Its distinguishing features are that it can maintain the edges of the input image and express various tile shapes and overlaps among tiles, as shown in Figures 4, 5 and 6. The proposed method still has some problems. First, too many small tiles are filled in between large tiles in the results, because grout spaces appear between a tile and the edge during the tile arrangement; this degrades the quality of the result image and needs to be improved. An additional step that considers the edges of the image during tile generation would reduce the grout spaces among tiles and between tiles and the image edges. Second, the performance of the whole process is very low, since the tile arrangement is performed per pixel; the GPU or other algorithms should be applied to improve the performance. The proposed approach also has some benefits. First, it can express various shapes of tiles and overlapping between tiles.


Second, if other types of paper such as newspaper are used instead of colored paper, another type of mosaic, such as a collage, can be produced. Other types of mosaic are easy to express in computer graphics by modifying the data structure, provided a more detailed and elaborate tile selection algorithm is applied.

References

1. Silver, R., Hawley, M. (eds.): Photomosaics, New York: Henry Holt, 1997
2. Alejo Hausner: Simulating Decorative Mosaics, SIGGRAPH 2001, pp. 573-580, 2001
3. Junhwan Kim, Fabio Pellacini: Jigsaw Image Mosaics, SIGGRAPH 2002, pp. 657-664, 2002
4. Chris Allen: A Hillclimbing Approach to Image Mosaics, UW-L Journal of Undergraduate Research, 2004
5. Young-Sup Park, Sung-Ye Kim, Cheung-Woon Jho, Kyung-Hyun Yoon: Mosaic Techniques using color paper, Proceeding of KCGS Conference, pp. 42-47, 2000
6. Sang-Hyun Seo, Young-Sup Park, Sung-Ye Kim, Kyung-Hyun Yoon: Colored Paper Mosaic Rendering, In SIGGRAPH 2001 Abstracts and Applications, p. 156, 2001
7. Sang-Hyun Seo, Dae-Uk Kang, Young-Sup Park, Kyung-Hyun Yoon: Colored Paper Mosaic Rendering Based on Image Segmentation, Proceeding of KCGS Conference, pp. 27-34, 2001
8. D. Comanicu, P. Meer: Mean shift: A robust approach toward feature space analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 603-619, May 2002
9. Mark de Berg, M. V. Kerveld, M. Overmars, O. Schwarzkopf: Computational Geometry: Algorithms and Applications, Springer, pp. 145-161, 1997
10. Aaron Hertzmann: Fast Paint Texture, NPAR 2002, 2002
11. Rafael C. Gonzalez, Richard E. Woods: Digital Image Processing, 2nd Edition, published by Prentice Hall, 2002

Real-Time Simulation of Surface Gravity Ocean Waves Based on the TMA Spectrum Namkyung Lee1 , Nakhoon Baek2, , and Kwan Woo Ryu1 1

Dept. of Computer Engineering, Kyungpook National Univ., Daegu 702-701, Korea [email protected],[email protected] 2 School of EECS, Kyungpook National Univ., Daegu 702-701, Korea [email protected] http://isaac.knu.ac.kr/~hope/tma.htm

Abstract. In this paper, we present a real-time method to display ocean surface gravity waves for various computer graphics applications. Starting from a precise surface gravity model in oceanography, we derive its implementation model, and our prototype implementation shows more than 50 frames per second on Intel Core2 Duo 2.40GHz PCs. Our major contributions are the improvement of the expressive power of ocean waves and the provision of more user-controllable parameters for various wave shapes. Keywords: Computer graphics, Simulation, Ocean wave, TMA.

1 Introduction

Realistic simulation of natural phenomena is one of the interesting and important issues in computer graphics related areas, including computer games and animations. In this paper, we focus on ocean waves, for which there are many research results but not yet a complete solution[1]. Waves on the surface of the ocean are primarily generated by winds and gravity. Although ocean waves include internal waves, tides, edge waves and others, it is clear that we should display at least the surface gravity waves on the computer screen to represent the ocean. In oceanography, there are many research results that mathematically model the surface waves of the ocean. Simple sinusoidal or trochoidal expressions can approximate a simple ocean wave; real-world waves are composed of these simple waves and are called wave trains. In computer graphics, we can classify the related results into two categories. The first one uses fluid dynamics equations in a way similar to the scientific simulation field. There are a number of results capable of producing realistic animations of complex water surfaces[2,3,4,5,6]; however, these results are hard to apply to large scenes of water such as oceans, mainly due to their heavy computation. The other category is based on the ocean wave models from oceanography and consists of three approaches. The first group uses the Gerstner swell model. Fournier[7] concentrated on shallow water waves and surf along a shore line; he started from parametric equations and added control parameters to simulate various shapes of shallow water waves, but not large-scale ocean scenes and/or deep-depth ocean waves.


More complex parametric equations to represent the propagation of water waves were introduced by Gonzato[8]. This model is well suited for modeling the propagation of wave fronts, but its equations are too complex for large-scale ocean waves. Another group regards the ocean surface as a height field with a prescribed spectrum based on experimental observations from oceanography. Mastin[9] introduced an effective simulation of wave behavior using the Fast Fourier Transform (FFT): the height field is constructed through the inverse FFT of the frequency spectrum of real-world ocean waves, and it can produce complex wave patterns similar to real-world ocean waves. Tessendorf[10] showed that dispersive propagation could be managed in the frequency domain and that the resulting field could be modified to yield trochoid waves. However, the negative aspect of FFT-based methods is homogeneity: we cannot handle any local properties such as refraction, reflection, and others. The last one is the hybrid approach: the spectrum synthesized by a spectral approach is used to control the trochoids generated by the Gerstner model. Hinsinger[11] presented an adaptive scheme for the animation and display of ocean waves in real time; it relied on a procedural wave model which expresses surface point displacements as sums of wave trains. In this paper, we aim to construct an ocean wave model with the following characteristics:

• Real-time capability: applications usually display a large-scale ocean scene, possibly with additional special effects, so we need to generate the ocean waves in real time.
• More user-controllable parameters: we provide more parameters to generate a variety of ocean scenes, including deep and shallow oceans, windy and calm oceans, etc.
• Focus on the surface gravity waves: since we target the large-scale ocean, minor details of the ocean wave are not our major interest. In fact, minor details can easily be super-imposed on the surface gravity waves, if needed.

In the following sections, we present a new hybrid approach to obtain a real-time surface gravity wave simulation. Since it is a hybrid approach, it can generate large-scale oceans without difficulty and works in real time, so it can be used with computer generated animations or other special effects. Additionally, we use a more precise wave model and have more controllable parameters, including the depth of the sea, the fetch length, the wind speed, and so on, in comparison with previous hybrid approaches. We start from the theoretical ocean wave models in the following section and build up our implementation model. Our implementation results and conclusions follow.

2 The Ocean Wave Model

The major generating force for waves is the wind acting on the interface between the air and the water. From the mathematical point of view, the surface is made up of many sinusoidal waves generated by the wind, traveling through the ocean. One of the fundamental models for the ocean wave is the Gerstner swell model, in which the trajectory of a water particle is expressed as a circle of radius r around its reference location at rest, (x_0, z_0), as follows[11]:

$x = x_0 + r \sin(\omega t - k x_0)$
$z = z_0 + r \cos(\omega t - k z_0)$,   (1)

where (x, z) is the actual location at time t, $\omega = 2\pi f$ is the pulsation with the frequency f, and $k = 2\pi/\lambda$ is the wave number with respect to the wave length $\lambda$. Equation (1) is a two-dimensional representation of the ocean wave, assuming that the x-axis coincides with the direction of wave propagation. The surface of an ocean is actually made up of a finite sum of these simple waves, and the height z of the water surface at the grid point (x, y) at time t can be expressed as

$z(x, y, t) = \sum_{i}^{n} A_i \cos\left(k_i (x \cos\theta_i + y \sin\theta_i) - \omega_i t + \varphi_i\right)$,   (2)

where n is the number of wave trains, $A_i$ is the amplitude, $k_i$ is the wave number, $\theta_i$ is the direction of wave propagation on the xy-plane and $\varphi_i$ is the phase. In Hinsinger[11], all these parameters are selected manually, and thus the user may have difficulties selecting proper values for them. In contrast, Thon[12] uses a spectrum-based method to find reasonable parameter sets. They used the Pierson-Moskowitz (PM) model[13], which empirically expresses a fully developed sea in terms of the wave frequency f as follows:

$E_{PM}(f) = \frac{0.0081\, g^2}{(2\pi)^4 f^5}\, e^{-\frac{5}{4}\left(\frac{f_p}{f}\right)^4}$,

where $E_{PM}(f)$ is the spectrum, g is the gravity constant and $f_p = 0.13\, g/U_{10}$ is the peak frequency depending on the wind speed $U_{10}$ at a height of 10 meters above the sea surface. Although Thon used the PM model to give some impressive results, the PM model itself assumes an infinite depth of the ocean and thus may fail for shallow seas. To overcome this drawback, the JONSWAP model and the TMA model were introduced. The JONSWAP (Joint North Sea Wave Project) model[14] was developed for fetch-limited seas such as the North Sea and is expressed as follows:

$E_{JONSWAP}(f) = \frac{\alpha\, g^2}{(2\pi)^4 f^5}\, e^{-\frac{5}{4}\left(\frac{f_p}{f}\right)^4} \cdot \gamma^{\,e^{-\frac{(f/f_p - 1)^2}{2\sigma^2}}}$,

where $\alpha$ is the scaling parameter, $\gamma$ is the peak enhancement factor, and $\sigma$ is evaluated as 0.07 for $f \le f_p$ and 0.09 otherwise. Given the fetch length F, the frequency at the spectral peak $f_p$ is calculated as follows:

$f_p = 3.5 \left(\frac{g^2 F}{U_{10}^3}\right)^{-0.33}$.


The Texel, MARSEN and ARSLOE (TMA) model[15] extends the JONSWAP model to include the depth of water h as one of its implicit parameters, as follows:

$E_{TMA}(f) = E_{JONSWAP}(f) \cdot \Phi(f^*, h)$,

where $\Phi(f^*, h)$ is the Kitaigorodskii depth function

$\Phi(f^*, h) = \left[1 + \frac{K}{s(f^*)\,\sinh K}\right]^{-1}$,

with $f^* = f\sqrt{h/g}$, $K = 2(f^*)^2 s(f^*)$ and $s(f^*) = \tanh^{-1}\left[(2\pi f^*)^2 h\right]$. The TMA model shows good empirical behavior even with a water depth of 6 meters. Thus, it is possible to represent the waves on the surface of lakes or small ponds, in addition to ocean waves. Additionally, it also includes the fetch length as a parameter, inherited from the JONSWAP model. Thus, the expressive power of the TMA model is much greater than that of the PM model previously used by other researchers. We use this improved wave model to achieve more realistic ocean scenes with more user-controllable parameters.
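The spectrum evaluation described above can be sketched as follows in C++; note that, to keep the sketch short, the exact Kitaigorodskii depth function is replaced by the well-known Thompson-Vincent piecewise approximation, which is a substitution on our part and not what the paper uses.

```cpp
#include <cmath>

namespace {
const double g = 9.81;                        // gravity (m/s^2)
const double PI = 3.14159265358979323846;
}

// Peak frequency of a fetch-limited sea: fp = 3.5 * (g^2 F / U10^3)^(-0.33),
// with fetch F in meters and wind speed U10 in m/s.
double peakFrequency(double F, double U10) {
    return 3.5 * std::pow(g * g * F / (U10 * U10 * U10), -0.33);
}

// JONSWAP spectrum with scaling parameter alpha and peak enhancement gamma
// (alpha and gamma are assumed to be chosen by the caller, e.g. gamma = 3.3).
double jonswap(double f, double fp, double alpha, double gamma) {
    double sigma = (f <= fp) ? 0.07 : 0.09;
    double r = std::exp(-std::pow(f / fp - 1.0, 2) / (2.0 * sigma * sigma));
    double base = alpha * g * g / (std::pow(2.0 * PI, 4) * std::pow(f, 5)) *
                  std::exp(-1.25 * std::pow(fp / f, 4));
    return base * std::pow(gamma, r);
}

// Depth attenuation: Thompson-Vincent approximation of the Kitaigorodskii
// factor, with omega_h = 2*pi*f*sqrt(h/g) (substituted for the exact form).
double depthAttenuation(double f, double h) {
    double wh = 2.0 * PI * f * std::sqrt(h / g);
    if (wh <= 1.0) return 0.5 * wh * wh;
    if (wh < 2.0)  return 1.0 - 0.5 * (2.0 - wh) * (2.0 - wh);
    return 1.0;
}

// TMA spectrum: E_TMA(f) = E_JONSWAP(f) * Phi(f*, h).
double tma(double f, double fp, double alpha, double gamma, double h) {
    return jonswap(f, fp, alpha, gamma) * depthAttenuation(f, h);
}
```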

3 The Implementation Model

To derive implementation-related expressions, we need to extend the TMA spectrum to the two-dimensional case as follows[14]:

$E(f, \delta) = E_{TMA}(f)\, D(f, \delta)$,

where $D(f, \delta)$ is a directional spreading factor that weights the spectrum at the angle $\delta$ from the downwind direction. The spreading factor is expressed as follows:

$D(f, \delta) = N_p^{-1} \cos^{2p}\!\left(\frac{\delta}{2}\right)$,

where $p = 9.77\,(f/f_p)^\mu$, $N_p = 2^{1-2p}\pi \cdot \Gamma(2p+1)/\Gamma^2(p+1)$ with Euler's Gamma function $\Gamma$, and

$\mu = \begin{cases} 4.06, & \text{if } f < f_p \\ -2.34, & \text{otherwise.} \end{cases}$

For convenience of implementation, we derive evaluation functions for the parameters, including the frequency, amplitude, wave direction, wave number and pulsation. The frequency of each wave train is determined from the peak frequency $f_p$ and a random offset that simulates the irregularity of the ocean waves. Thereafter, the pulsation and the wave number are calculated directly from their definitions. According to random linear wave theory[16,17,18,19,20], the directional wave spectrum $E(f, \delta)$ is given by

$E(f, \delta) = \Psi(k(f), \delta) \cdot k(f)\,\frac{dk(f)}{df}$,   (3)

where $k(f) = 4\pi^2 f^2/g$ and $\Psi(k(f), \delta)$ is a wave number spectrum. The second and third terms in Equation (3) can be computed as

$k(f)\,\frac{dk(f)}{df} = \frac{32\,\pi^2 f^3}{g^2}$.

This allows us to rewrite Equation (3) as follows[17]:

$E(f, \delta) = \Psi(k(f), \delta)\,\frac{32\,\pi^2 f^3}{g^2}$.

From the random linear wave theory[17,19], the wave number spectrum $\Psi(k(f), \delta)$ can be approximated as

$\Psi(k(f), \delta) = \frac{\beta A(f)^2}{4\pi^2}$,

where $\beta$ is a constant. Finally, the amplitude A(f) of a wave train is evaluated as

$A(f) = \sqrt{\frac{E(f, \delta)\, g^2}{8 f^3 \beta}} = \sqrt{\frac{E_{TMA}(f)\, D(f, \delta)\, g^2}{8 f^3 \beta}}$.

Using all these derivations, we can calculate the parameter values for Equation (2), and then evaluate the height of each grid point (x, y) to construct a rectangular mesh representing the ocean surface.
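The following hedged C++ sketch shows how the wave-train parameters and the height field of Equation (2) could be evaluated; the random spreads around f_p and around the downwind direction are illustrative assumptions, while the amplitude relation A(f) and the wave number k(f) = 4*pi^2*f^2/g follow the text above.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct WaveTrain {
    double A;      // amplitude A_i
    double k;      // wave number k_i
    double omega;  // pulsation omega_i = 2*pi*f_i
    double theta;  // propagation direction theta_i
    double phi;    // phase phi_i
};

// Build n wave trains: each frequency is the peak frequency fp plus a random
// offset; the directional spectrum value E(f, delta) is supplied by the
// caller (e.g. TMA spectrum times spreading factor), and the amplitude
// follows A(f) = sqrt(E(f, delta) * g^2 / (8 f^3 beta)).
std::vector<WaveTrain> makeWaveTrains(int n, double fp, double windDir,
                                      double beta, unsigned seed,
                                      double (*E)(double f, double delta)) {
    const double g = 9.81, PI = 3.14159265358979323846;
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> df(-0.4 * fp, 0.4 * fp);  // assumed spread
    std::uniform_real_distribution<double> dd(-0.5, 0.5);            // assumed spread (rad)
    std::vector<WaveTrain> trains;
    for (int i = 0; i < n; ++i) {
        double f = fp + df(rng);
        double delta = dd(rng);                 // angle from the downwind direction
        double A = std::sqrt(E(f, delta) * g * g / (8.0 * f * f * f * beta));
        double omega = 2.0 * PI * f;
        double k = 4.0 * PI * PI * f * f / g;   // k(f) as defined in the text
        trains.push_back({A, k, omega, windDir + delta, 0.0});
    }
    return trains;
}

// Equation (2): z(x, y, t) = sum_i A_i cos(k_i (x cos th_i + y sin th_i) - w_i t + phi_i).
double height(const std::vector<WaveTrain>& trains, double x, double y, double t) {
    double z = 0.0;
    for (const WaveTrain& w : trains)
        z += w.A * std::cos(w.k * (x * std::cos(w.theta) + y * std::sin(w.theta)) -
                            w.omega * t + w.phi);
    return z;
}
```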

4 Implementation Results

We implemented the ocean wave generation program based on the TMA model presented in the previous section; Figures 1, 2 and 3 show some outputs from this prototype implementation. It uses the plain OpenGL library and does not use any multi-threading or hardware-based acceleration techniques. At this time, we focused on the expressive power of our TMA model-based implementation, and thus our prototype lacks some acceleration and optimization factors. Even so, it shows more than 50 frames per second on a PC with an Intel Core2 Duo 6600 2.40GHz processor and a GeForce 7950GT based graphics card.

Fig. 1. Ocean waves with different water depths: (a) wind speed 3 m/s, water depth 5 m; (b) wind speed 3 m/s, water depth 100 m. Even with the same wind speed, different water depths result in very different waves. We use a fetch length of 5 km for these images.


Fig. 2. Ocean waves with different wind velocities: (a) wind speed 3 m/s, water depth 100 m; (b) wind speed 6 m/s, water depth 100 m. Changes in wind speed generate calmer or choppier waves. A fetch length of 10 km is used for each of these images.

Fig. 3. An animated sequence of ocean waves (frames (a)-(l))


We expect that the frame rate will be much better in the next version. In Figure 1, we control the depth of the ocean to show very different waves even with the same wind speed and the same fetch length; in particular, such changes in the water depth can be handled only by the TMA model, not by the previously used PM model. Figure 2 shows the effect of changing the wind speed: as expected, a higher wind speed generates choppier waves. Figure 3 is a sequence of images captured during the real-time animation of a windy ocean. All examples are executed with a mesh resolution of 200 × 200. More examples are available on our web page, http://isaac.knu.ac.kr/~hope/tma.htm.

5 Conclusion

In this paper, we present a real-time surface gravity wave simulation method derived from a precise ocean wave model in oceanography. We started from the TMA model, which, at least to our knowledge, has not previously been used for a graphics implementation. Since we use a more precise ocean wave model, users can control more parameters to create various ocean scenes. The two major improvements of our method in comparison with previous work are:

• Enhanced expressive power: our method can display visually plausible scenes even for shallow seas.
• Improved user controllability: our method provides more parameters, such as the fetch length and the depth of water, in addition to the wind velocity.

We implemented a prototype system and showed that it can generate animated sequences of ocean waves in real time. We plan to integrate our implementation into large-scale applications such as games, maritime training simulators, etc. Detailed variations of the ocean waves can also be added to our implementation with minor modifications.

Acknowledgements This research was supported by the Regional Innovation Industry Promotion Project which was conducted by the Ministry of Commerce, Industry and Energy(MOCIE) of the Korean Government (70000187-2006-01).

References

1. Iglesias, A.: Computer graphics for water modeling and rendering: a survey. Future Generation Comp. Syst. 20(8) (2004) 1355–1374
2. Enright, D., Marschner, S., Fedkiw, R.: Animation and rendering of complex water surfaces. In: SIGGRAPH '02. (2002) 736–744
3. Foster, N., Fedkiw, R.: Practical animation of liquids. In: SIGGRAPH '01. (2001) 23–30


4. Foster, N., Metaxas, D.N.: Realistic animation of liquids. CVGIP: Graphical Model and Image Processing 58(5) (1996) 471–483
5. Foster, N., Metaxas, D.N.: Controlling fluid animation. In: Computer Graphics International '97. (1997) 178–188
6. Stam, J.: Stable fluids. In: SIGGRAPH '99. (1999) 121–128
7. Fournier, A., Reeves, W.T.: A simple model of ocean waves. In: SIGGRAPH '86. (1986) 75–84
8. Gonzato, J.C., Saëc, B.L.: On modelling and rendering ocean scenes. J. of Visualization and Computer Animation 11(1) (2000) 27–37
9. Mastin, G.A., Watterberg, P.A., Mareda, J.F.: Fourier synthesis of ocean scenes. IEEE Comput. Graph. Appl. 7(3) (1987) 16–23
10. Tessendorf, J.: Simulating ocean water. In: SIGGRAPH '01 Course Notes. (2001)
11. Hinsinger, D., Neyret, F., Cani, M.P.: Interactive animation of ocean waves. In: SCA '02: Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. (2002) 161–166
12. Thon, S., Dischler, J.M., Ghazanfarpour, D.: Ocean waves synthesis using a spectrum-based turbulence function. In: Computer Graphics International '00. (2000) 65–
13. Pierson, W., Moskowitz, L.: A proposed spectral form for fully developed wind seas based on the similarity theory of S.A. Kitaigorodskii. J. Geophysical Research (69) (1964) 5181–5190
14. Hasselmann, D., Dunckel, M., Ewing, J.: Directional wave spectra observed during JONSWAP 1973. J. Physical Oceanography 10(8) (1980) 1264–1280
15. Bouws, E., Günther, H., Rosenthal, W., Vincent, C.L.: Similarity of the wind wave spectrum in finite depth water: Part 1. Spectral form. J. Geophysical Research 90 (1985) 975–986
16. Crawford, F.: Waves. McGraw-Hill (1977)
17. Krogstad, H., Arntsen, Ø.: Linear Wave Theory. Norwegian Univ. of Sci. and Tech. (2006) http://www.bygg.ntnu.no/oivarn/
18. Seyringer, H.: Nature wizard (2006) http://folk.ntnu.no/oivarn/hercules ntnu/LWTcourse/
19. Sorensen, R.: Basic Coastal Engineering. Springer-Verlag (2006)
20. US Army Corps of Engineers Internet Publishing Group: Coastal engineering manual – part II (2006) http://www.usace.army.mil/publications/engmanuals/em1110-2-1100/PartII/PartII.htm

Determining Knots with Quadratic Polynomial Precision Zhang Caiming1,2 , Ji Xiuhua1 , and Liu Hui1 1

School of Computer Science and Technology, University of Shandong Economics, Jinan 250014, China 2 School of Computer Science and Technology, Shandong University, Jinan 250061, China

Abstract. A new method for determining knots in parametric curve interpolation is presented. The determined knots have a quadratic polynomial precision in the sense that an interpolation scheme which reproduces quadratic polynomials would reproduce parametric quadratic polynomials if the new method is used to determine knots in the interpolation process. Testing results on the efficiency of the new method are also included. Keywords: parametric curves, knots, polynomials.

1 Introduction

The problem of constructing parametric interpolating curves is of fundamental importance in CAGD, CG, scientific computing and so on. The constructed curve is often required to have a good approximation precision as well as the shape suggested by the data points. The construction of an ideal parametric interpolating curve requires not only a good interpolation method, but also an appropriate choice of the parameter knots. In parametric curve construction, the chord length parametrization is a widely accepted and used method to determine knots [1][2]. Two other useful methods are the centripetal model[3] and the adjusted chord length method ([4], referred to as Foley's method). When these three methods are used, the constructed interpolant can only reproduce straight lines. In paper [5], a method for determining knots is presented (referred to as the ZCM method); the knots are determined using a global method, and they can be used to construct interpolants which reproduce parametric quadratic curves if the interpolation scheme reproduces quadratic polynomials. A new method for determining knots is presented in this paper. The knots associated with the points are computed by a local method, and the determined knots have a quadratic polynomial precision. Experiments showed that curves constructed using the knots computed by the new method generally have better interpolation precision. The remaining part of the paper is arranged as follows. The basic idea of the new method is described in Section 2. The choice of knots by constructing a parametric quadratic interpolant to four data points is discussed in Section 3.


The comparison of the new method with the other four methods is performed in Section 4. The conclusions and future work are given in Section 5.

2 Basic Idea

Let Pi = (xi, yi), 1 ≤ i ≤ n, be a given set of distinct data points which satisfies the condition that for any point Pi, 1 < i < n, there are at least two sets of four consecutive convex data points which include it. As an example, for the data points in Figure 3, the point Pi+1 belongs to the two sets of consecutive convex data points {Pi−2, Pi−1, Pi, Pi+1} and {Pi, Pi+1, Pi+2, Pi+3}, respectively. The goal is to construct a knot ti for each Pi, 1 ≤ i ≤ n. The constructed knots satisfy the following condition: if the set of data points is taken from a parametric quadratic polynomial, i.e.,

$$P_i = A\xi_i^2 + B\xi_i + C, \quad 1 \le i \le n \qquad (1)$$

where A = (a1, a2), B = (b1, b2) and C = (c1, c2) are 2D points, then

$$t_i - t_{i-1} = \alpha(\xi_i - \xi_{i-1}), \quad 1 \le i \le n \qquad (2)$$

for some constant α > 0. Such a set of knots ti, 1 ≤ i ≤ n, is known to have a quadratic polynomial precision. Obviously, using knots satisfying equation (2), an interpolation scheme which reproduces quadratic polynomials will reproduce parametric quadratic polynomials. In the following, the basic idea for determining the knots ti, 1 ≤ i ≤ n, is described. If the set of data points is taken from a parametric quadratic polynomial P(ξ) = (x(ξ), y(ξ)) defined by

$$x(\xi) = X_2\xi^2 + X_1\xi + X_0, \qquad y(\xi) = Y_2\xi^2 + Y_1\xi + Y_0, \qquad (3)$$

then there is a rotation that transforms it to the following parabola form, as shown in Figure 1:

$$\bar{y} = a_1 t^2 + b_1 t + c_1, \qquad \bar{x} = t. \qquad (4)$$

Then the knots ti, 1 ≤ i ≤ n, can be defined by ti = x̄i, i = 1, 2, 3, ..., n, which has a quadratic polynomial precision. Assume that the transformation

$$\bar{x} = x\cos\beta_2 + y\sin\beta_2, \qquad \bar{y} = -x\sin\beta_2 + y\cos\beta_2$$

transforms P(ξ) (3) to the parabola (4); then we have the following Theorem 1.


Fig. 1. A standard parabola in the x̄ȳ coordinate system

Theorem 1. If the set of data points is taken from a parametric quadratic polynomial P(ξ) (3), then the knots ti, i = 1, 2, 3, ..., n, which have a quadratic polynomial precision, can be defined by

$$t_1 = 0, \qquad t_i = t_{i-1} + (x_i - x_{i-1})\cos\beta_2 + (y_i - y_{i-1})\sin\beta_2, \quad i = 2, 3, \dots, n \qquad (5)$$

where

$$\sin\beta_2 = -X_2\big/\sqrt{X_2^2 + Y_2^2}, \qquad \cos\beta_2 = Y_2\big/\sqrt{X_2^2 + Y_2^2}. \qquad (6)$$

Proof. In the x̄ȳ coordinate system, it follows from (3) that

$$\bar{x} = (X_2\xi^2 + X_1\xi + X_0)\cos\beta_2 + (Y_2\xi^2 + Y_1\xi + Y_0)\sin\beta_2,$$
$$\bar{y} = -(X_2\xi^2 + X_1\xi + X_0)\sin\beta_2 + (Y_2\xi^2 + Y_1\xi + Y_0)\cos\beta_2. \qquad (7)$$

If sin β2 and cos β2 are defined by (6), the quadratic term of the first expression of (7) vanishes, and it becomes

$$\xi = \frac{\bar{x} - (X_0\cos\beta_2 + Y_0\sin\beta_2)}{X_1\cos\beta_2 + Y_1\sin\beta_2}. \qquad (8)$$

Substituting (8) into the second expression of (7) and rearranging, a parabola is obtained, which is defined by ȳ = a1x̄² + b1x̄ + c1, where a1, b1 and c1 are defined by

$$a_1 = Y_2\cos\beta_2 - X_2\sin\beta_2,$$
$$b_1 = -2a_1AB + (Y_1\cos\beta_2 - X_1\sin\beta_2)A,$$
$$c_1 = a_1A^2B^2 - (Y_1\cos\beta_2 - X_1\sin\beta_2)AB + Y_0\cos\beta_2 - X_0\sin\beta_2.$$

Thus, ti can be defined by x̄i, i.e., the knots ti, i = 1, 2, 3, ..., n, can be defined by (5), which has a quadratic polynomial precision.
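As a concrete illustration of Theorem 1, the following Python sketch computes knots via (5) and (6) for data points sampled from a parametric quadratic polynomial. It is only a minimal sketch; the function and variable names are our own and not from the paper.

```python
import numpy as np

def knots_from_quadratic(points, X2, Y2):
    """Knots t_i for data points sampled from P(xi) = (x(xi), y(xi)),
    given the leading coefficients X2, Y2 of x(xi) and y(xi) (Theorem 1)."""
    r = np.hypot(X2, Y2)
    sin_b2, cos_b2 = -X2 / r, Y2 / r               # equation (6)
    pts = np.asarray(points, dtype=float)
    dx, dy = np.diff(pts[:, 0]), np.diff(pts[:, 1])
    return np.concatenate(([0.0],                  # t_1 = 0
                           np.cumsum(dx * cos_b2 + dy * sin_b2)))  # equation (5)

# Example: points from P(xi) = (xi^2 + xi, 2*xi^2 - xi) at xi = 0, 0.5, 1, 1.5
xi = np.array([0.0, 0.5, 1.0, 1.5])
pts = np.column_stack((xi**2 + xi, 2 * xi**2 - xi))
print(knots_from_quadratic(pts, X2=1.0, Y2=2.0))   # increments proportional to the xi increments
```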


The discussion above shows that the key point of determining knots is to construct the quadratic polynomial P(ξ) = (x(ξ), y(ξ)) (3) using the given data points. This will be discussed in Section 3.

3 Determining Knots

In this section, we first discuss how to construct a quadratic polynomial with four points, and then discuss the determination of knots using the quadratic polynomial.

3.1 Constructing a Quadratic Polynomial with Four Points

Let Qi(ξ) be a parametric quadratic polynomial which interpolates Pi−1, Pi and Pi+1. Qi(ξ) can be defined on the interval [0, 1] as follows:

$$Q_i(s) = \psi_1(s)(P_{i-1} - P_i) + \psi_2(s)(P_{i+1} - P_i) + P_i \qquad (9)$$

where

$$\psi_1(s) = \frac{(s - s_i)(s - 1)}{s_i}, \qquad \psi_2(s) = \frac{s(s - s_i)}{1 - s_i} \qquad (10)$$

where 0 < si < 1. Expressions (9) and (10) show that four data points are needed to determine a parametric quadratic polynomial uniquely. Let Pj = (xj, yj), i − 1 ≤ j ≤ i + 2, be four points of which no three lie on a straight line. The point Pi+2 will be used to determine si in (10). Without loss of generality, the coordinates of Pi−1, Pi, Pi+1 and Pi+2 are supposed to be (0, 1), (0, 0), (1, 0) and (xi+2, yi+2), respectively, as shown in Figure 2. In the xy coordinate system, Qi(s) defined by (9) becomes

$$x = s(s - s_i)/(1 - s_i), \qquad y = (s - s_i)(s - 1)/s_i. \qquad (11)$$

Let si+2 be the knot associated with the point (xi+2, yi+2). As the point (xi+2, yi+2) is on the curve, we have

$$x_{i+2} = s_{i+2}(s_{i+2} - s_i)/(1 - s_i), \qquad y_{i+2} = (s_{i+2} - s_i)(s_{i+2} - 1)/s_i. \qquad (12)$$

It follows from (12) that

$$s_{i+2} = x_{i+2} + (1 - x_{i+2} - y_{i+2})s_i. \qquad (13)$$

Substituting (13) into (12), one gets the following equation:

$$s_i^2 + A(x_{i+2}, y_{i+2})s_i + B(x_{i+2}, y_{i+2}) = 0 \qquad (14)$$

Fig. 2. Pi+2 is in the dotted region

where

$$A(x_{i+2}, y_{i+2}) = -\frac{2x_{i+2}}{x_{i+2} + y_{i+2}}, \qquad B(x_{i+2}, y_{i+2}) = \frac{(1 - x_{i+2})x_{i+2}}{(1 - x_{i+2} - y_{i+2})(x_{i+2} + y_{i+2})}.$$

As si+2 > 1, the root of (14) is

$$s_i = \frac{1}{x_{i+2} + y_{i+2}}\left( x_{i+2} - \sqrt{\frac{x_{i+2}\,y_{i+2}}{x_{i+2} + y_{i+2} - 1}} \right). \qquad (15)$$

It follows from (9)–(10) that if the given data points are taken from a parametric quadratic polynomial Q(t), then there is a unique si satisfying 0 < si < 1 which makes the curve Qi(s) (9) pass through the given data points. Since si is determined uniquely by (15), Qi(s) is equivalent to Q(t). Substituting si+2 > 1 into (11) one obtains

$$x_{i+2} > 1 \quad \text{and} \quad y_{i+2} > 0, \qquad (16)$$

that is, the point (xi+2, yi+2) should be in the dotted region in Figure 2.
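For illustration, the following Python sketch computes si from (15) after expressing the four data points in the affine frame in which Pi−1, Pi, Pi+1 become (0, 1), (0, 0), (1, 0). The normalization helper and names are our own construction, not part of the paper.

```python
import numpy as np

def s_i_from_points(p_im1, p_i, p_ip1, p_ip2):
    """Compute s_i of equation (15) for four data points."""
    p_im1, p_i, p_ip1, p_ip2 = map(np.asarray, (p_im1, p_i, p_ip1, p_ip2))
    # columns: P_{i+1}-P_i maps to the x-axis, P_{i-1}-P_i to the y-axis
    M = np.column_stack((p_ip1 - p_i, p_im1 - p_i))
    x2, y2 = np.linalg.solve(M, p_ip2 - p_i)       # normalized (x_{i+2}, y_{i+2})
    # equation (15); requires x2 > 1 and y2 > 0 (condition (16))
    return (x2 - np.sqrt(x2 * y2 / (x2 + y2 - 1.0))) / (x2 + y2)

# Point generated from (11) with s_i = 0.4 and s_{i+2} = 1.5; the call recovers 0.4
print(s_i_from_points((0, 1), (0, 0), (1, 0), (2.75, 1.375)))
```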

3.2 Determining Knots

After si is determined, Qi(s) (9) can be written as

$$x_i(s) = X_{i,2}s^2 + X_{i,1}s + X_{i,0}, \qquad y_i(s) = Y_{i,2}s^2 + Y_{i,1}s + Y_{i,0}, \qquad (17)$$

where

$$X_{i,2} = \frac{x_{i-1} - x_i}{s_i} + \frac{x_{i+1} - x_i}{1 - s_i}, \qquad X_{i,1} = -\frac{(x_{i-1} - x_i)(s_i + 1)}{s_i} - \frac{(x_{i+1} - x_i)s_i}{1 - s_i}, \qquad X_{i,0} = x_{i-1},$$
$$Y_{i,2} = \frac{y_{i-1} - y_i}{s_i} + \frac{y_{i+1} - y_i}{1 - s_i}, \qquad Y_{i,1} = -\frac{(y_{i-1} - y_i)(s_i + 1)}{s_i} - \frac{(y_{i+1} - y_i)s_i}{1 - s_i}, \qquad Y_{i,0} = y_{i-1}. \qquad (18)$$


It follows from Theorem 1 that for i = 2, 3, ..., n−2, the knot interval tj+1 − tj = Δ_j^i between Pj and Pj+1, j = i − 1, i, i + 1, can be defined by

$$\Delta_j^i = (x_{j+1} - x_j)\cos\beta_i + (y_{j+1} - y_j)\sin\beta_i, \quad j = i - 1, i, i + 1 \qquad (19)$$

where cos βi and sin βi are defined by (cf. (6))

$$\sin\beta_i = -X_{i,2}\big/\sqrt{X_{i,2}^2 + Y_{i,2}^2}, \qquad \cos\beta_i = Y_{i,2}\big/\sqrt{X_{i,2}^2 + Y_{i,2}^2}.$$

Based on definition (19), for the pair P1 and P2 there is one knot interval, Δ_1^2; for the pair P2 and P3 there are two knot intervals, Δ_2^2 and Δ_2^3; for the pair Pi and Pi+1, 3 ≤ i ≤ n − 2, there are three knot intervals, Δ_i^{i−1}, Δ_i^i and Δ_i^{i+1}; the knot intervals for the pair Pj−1 and Pj, j = n − 1, n, are similar. Now the knot interval Δi for the pair Pi and Pi+1, i = 1, 2, ..., n − 1, is defined by

$$\Delta_1 = \Delta_1^2,\quad \Delta_2 = \Delta_2^2 + \delta_2^2,\quad \Delta_i = \Delta_i^i + 2\delta_i^1\delta_i^2/(\delta_i^1 + \delta_i^2),\ \ i = 3, 4, \dots, n-3,\quad \Delta_{n-2} = \Delta_{n-2}^{n-2} + \delta_{n-2}^1,\quad \Delta_{n-1} = \Delta_{n-1}^{n-2} \qquad (20)$$

where

$$\delta_i^1 = |\Delta_i^i - \Delta_i^{i-1}|, \qquad \delta_i^2 = |\Delta_i^i - \Delta_i^{i+1}|.$$

If the given set of data points is taken from a parametric quadratic polynomial, then δ_i^1 = δ_i^2 = 0. The terms δ_2^2, 2δ_i^1δ_i^2/(δ_i^1 + δ_i^2) and δ_{n-2}^1 are corrections to Δ2, Δi, i = 3, 4, ..., n − 3, and Δ_{n-2}, respectively.

Fig. 3. Example 1 of the data points

For the data points shown in Figure 3, where the data points change their convexity, the knot interval between Pi and Pi+1 is defined by the combination of Δ_i^{i−1} and Δ_i^{i+1}, i.e., by

$$\Delta_i = (\Delta_i^{i-1} + \Delta_i^{i+1})/2. \qquad (21)$$

For the data points shown in Figure 4, the knot intervals are determined by subdividing the data points at point Pi+1 into two sets of data points. The first


Fig. 4. Example 2 of the data points

set of data points ends at Pi+1, while the second set of data points starts at Pi+1. If Pi−1, Pi and Pi+1 are on a straight line, then we set ti − ti−1 = |Pi−1Pi| and ti+1 − ti = |PiPi+1|; this choice makes the quadratic polynomial Qi(t) which passes through Pi−1, Pi and Pi+1 a straight line with the magnitude of the first derivative being constant. Such a straight line is the best-defined curve one can obtain in this case.

4 Experiments

The new method has been compared with the chord length, centripetal, Foley and ZCM methods. The comparison is performed by using the knots computed by these methods in the construction of a parametric cubic spline interpolant. For brevity, the cubic splines produced using these methods are called the chord spline, centripetal spline, Foley's spline, ZCM spline and new spline, respectively. The data points used in the comparison are taken from the following ellipse:

$$x = x(\tau) = 3\cos(2\pi\tau), \qquad y = y(\tau) = 2\sin(2\pi\tau). \qquad (22)$$

The comparison is performed by dividing the interval [0, 1] into 36 sub-intervals to define data points, i.e., τi is defined by

$$\tau_i = \big(i + \sigma\sin((36 - i) * i)\big)/36, \qquad i = 0, 1, 2, \dots, 36$$
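As an illustration of this test setup, the following Python sketch (our own, not from the paper) generates the perturbed parameter values and the corresponding data points on the ellipse (22).

```python
import numpy as np

def ellipse_test_points(sigma, n=36):
    """Data points on the ellipse (22) sampled at the perturbed parameters tau_i."""
    i = np.arange(n + 1)
    tau = (i + sigma * np.sin((n - i) * i)) / n
    x = 3.0 * np.cos(2.0 * np.pi * tau)
    y = 2.0 * np.sin(2.0 * np.pi * tau)
    return tau, np.column_stack((x, y))

tau, pts = ellipse_test_points(sigma=0.15)
print(pts[:3])
```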

where 0 ≤ σ ≤ 0.25. To avoid the maximum error occurring near the end points (x0, y0) and (x20, y20), the tangent vectors of F(τ) at τ = 0 and τ = 1 are used as the end conditions to construct the cubic splines. The five methods are compared by the absolute error curve E(t), defined by

$$E(t) = \min\{|P(t) - F(\tau)|\} = \min\{|P_i(t) - F(\tau)|,\ \tau_i \le \tau \le \tau_{i+1}\}, \qquad i = 0, 1, 2, \dots, 19$$

where P(t) denotes one of the chord spline, centripetal spline, Foley's spline, ZCM spline or new spline, Pi(t) is the corresponding part of P(t) on the subinterval [ti, ti+1], and F(τ) is defined by (22). For the point P(t), E(t) is the shortest distance from the curve F(τ) to P(t).


Table 1. Maximum absolute errors

Error      chord      centripetal   Foley      ZCM        New
σ = .00    5.29e-5    5.29e-5       5.29e-5    5.29e-5    5.29e-5
σ = .05    1.67e-4    3.71e-3       2.39e-3    1.58e-4    1.60e-4
σ = .10    3.17e-4    8.00e-3       5.33e-3    2.93e-4    2.89e-4
σ = .15    5.08e-4    1.30e-2       8.88e-3    4.58e-4    4.37e-4
σ = .20    7.41e-4    1.86e-2       1.31e-2    6.55e-4    6.04e-4
σ = .25    1.02e-3    2.49e-2       1.79e-2    8.86e-4    7.88e-4

The maximum values of the error curve E(t) generated by these methods are shown in Table 1. The five methods have also been compared on data points which divide [0, 1] into 18, 72, etc. subintervals. The results are basically similar to those shown in Table 1.

5 Conclusions and Future Works

A new method for determining knots in parametric curve interpolation is presented. The determined knots have a quadratic polynomial precision. This means that, from the approximation point of view, the new method is better than the chord length, centripetal and Foley's methods in terms of error evaluation in the associated Taylor series. The ZCM method also has a quadratic polynomial precision, but it is a global method, while the new method is a local one. The new method works well on data points whose convexity does not change sign; our next work is to extend it to data points whose convexity changes sign.

Acknowledgments. This work was supported by the National Key Basic Research 973 Program of China (2006CB303102) and the National Natural Science Foundation of China (60533060, 60633030).

References
1. Ahlberg, J.H., Nilson, E.N., Walsh, J.L.: The Theory of Splines and Their Applications. Academic Press, New York, NY, USA (1967)
2. de Boor, C.: A Practical Guide to Splines. Springer Verlag, New York (1978)
3. Lee, E.T.Y.: Choosing nodes in parametric curve interpolation. CAD 21(6) (1989) 363-370
4. Farin, G.: Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide. Academic Press (1988)
5. Zhang, C., Cheng, F., Miura, K.: A method for determining knots in parametric curve interpolation. CAGD 15 (1998) 399-416

Interactive Cartoon Rendering and Sketching of Clouds and Smoke

Eduardo J. Álvarez¹, Celso Campos¹, Silvana G. Meire¹, Ricardo Quirós², Joaquin Huerta², and Michael Gould²

¹ Departamento de Informática, Universidad de Vigo, Spain
[email protected]
² Departamento de Lenguajes y Sistemas Informáticos, Universitat Jaume I, Spain
{quiros, huerta, gould}@lsi.uji.es

Abstract. We present several techniques to generate clouds and smoke in cartoon and sketch styles at interactive speed. The proposed method abstracts the visual and geometric complexity of the gaseous phenomena using a particle system. The abstraction process relies on implicit surfaces, which are later used to calculate the silhouette and obtain the resulting image. Additionally, we add detail layers that improve the appearance and provide the sensation of greater volume for the gaseous effect. Finally, we also include in our application a simulator that generates smoke animations.

1 Introduction

The automatic generation of cartoons requires the use of two basic techniques in expressive rendering: a specific illumination model for this rendering style and the visualization of the objects' silhouettes. This style is known as "cartoon rendering" and its use is common in the production of animation films and in the creation of television content. The use of cartoon rendering techniques in video games is also growing, as they can produce more creative details than techniques based on realism. There are several techniques to automatically calculate the silhouette (outline) and cel-shading [1][2][3]. Shadowing and self-shadowing, along with the silhouettes, are fundamental effects for expressing the volume, position and limits of objects. Most of these techniques require general meshes and they do not allow representation of amorphous shapes, which are modeled by particle systems, as in the case of clouds and smoke. Our objective is to create cartoon vignettes for interactive entertainment applications, combining cartoon techniques with a particle system simulator that allows representation of amorphous shapes such as clouds and smoke. Special attention should be paid to the visual complexity of this type of gaseous phenomena; therefore we use implicit surfaces in order to abstract and simplify this complexity [4][5]. To obtain the expressive appearance, we introduce an algorithm that enhances silhouette visualization within a cartoon rendering. For the simulation of smoke, we use a particle system based on Selle's [6] hybrid model.


2 Previous Work

Clouds are important elements in the modeling of natural scenes, both for obtaining high-quality images and for interactive applications. Clouds and smoke are gaseous phenomena that are very complicated to represent because of several issues: their fractal nature, the intrinsic difficulty of their animation, and local illumination differences. The representation of cloud shapes has been treated by three different strategies: volumetric clouds (explicit [7] or procedural [8]), using billboards [9][10], and by general surfaces [12][13]. The approach based on volume, in spite of the improvements in graphics hardware, is not yet possible at interactive speed because of the typical scene size and the level of detail required to represent the sky. The impostor and billboard approach is the most widely used solution in video games and, although the results are suitable, their massive use slows the visualization due to the great number of pixels that must be rendered. On the other hand, the use of general surfaces allows efficient visualization but generates overly coarse models for representing volumetric forms. Bouthors [11] extends Gardner's model [12][13] by using a hierarchy of almost-spherical particles related to an implicit field that defines a surface. This surface is later rendered to create a volumetric characteristic that provides realistic clouds. In expressive rendering, the relevant works on gaseous phenomena are scarce in the literature. The first works published in this field are from Di Fiore [14] and Selle [6], who tried to create streamlined animations of these phenomena. The approach of Di Fiore combines a variant of second-order particle systems to simulate the gaseous effect movement using 2D billboards drawn by artists, which are called 'basic visualization components'. Selle introduces a technique that facilitates the animation of cartoon-rendered smoke. He proposes to use a particle system whose movement is generated with the method presented by Fedkiw [15] for the simulation of photorealistic smoke. To achieve the expressive appearance, each particle is rendered as a disc in the depth buffer, creating a smoke cloud. In a second iteration of the algorithm, the silhouette of the whole smoke cloud is calculated by reading the depth buffer and applying the depth differences. This method obtains approximately one image per second and has been used by Deussen [16] for the generation of illustrations of trees. McGuire [17] presents an algorithm for the real-time generation of cartoon-rendered smoke. He extends Selle's model incorporating shading, shadows, and nailboards (billboards with depth maps). Nailboards are used to calculate intersections between smoke and geometry, and to render the silhouette without using the depth buffer. The particle system is based on work recently presented by Selle, Rasmussen, and Fedkiw [18], which introduces a hybrid method that generates synergies using Lagrangian vortex particle methods and Eulerian grid-based methods.

3 Scene Modeling

The rendering process necessarily requires an abstraction and simplification of the motif. This is made evident in the generation of hand-drawn sketches, even more so when representing gases. By means of several strokes the artist adds detail to the


scene, creating a convincing simplification of the object representation which can be easily recognized by the viewer. Our method provides the user with complete freedom to design the shape and the aspect (appearance) of the cloud. In a first approach, we propose to model clouds as static elements in the scene, the same way it normally happens in animation films. The process of modeling clouds begins with the definition by the user of the particles pi that comprise the cloud, each one having a center ci, a radius ri and a mass mi. Once the set of particles is defined, we perform the simplification and abstraction of the geometric model of the clouds. To calculate the implicit surface described by the total particle set, we use the density function proposed by Murakami and Ichihara [19], and later used by Luft and Deussen [5] for the real-time illustration of plants in watercolor style. The influence of a particle pi at a point q is described by a density function Di(q) defined as:

$$D_i(q) = \left(1 - \left(\frac{\lVert q - c_i\rVert}{r_i}\right)^2\right)^2 \quad \text{for } \lVert q - c_i\rVert < r_i, \qquad (1)$$

and Di(q) = 0 otherwise.
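To make the construction concrete, here is a small Python sketch (our own illustration, not code from the paper) that evaluates the aggregate density field of a particle set and tests whether a point lies inside the implicit surface for a given threshold; the threshold name and value are assumptions.

```python
import numpy as np

def density(q, centers, radii):
    """Sum of the per-particle densities D_i(q) of equation (1)."""
    q = np.asarray(q, dtype=float)
    d = np.linalg.norm(np.asarray(centers, dtype=float) - q, axis=1)
    t = np.clip(1.0 - (d / np.asarray(radii, dtype=float)) ** 2, 0.0, None)  # 0 outside r_i
    return np.sum(t ** 2)

def inside_surface(q, centers, radii, threshold=0.5):
    """Point q is inside the implicit surface if the summed density exceeds the threshold."""
    return density(q, centers, radii) >= threshold

centers = [(0.0, 0.0, 0.0), (0.8, 0.0, 0.0)]
radii = [1.0, 0.7]
print(density((0.4, 0.0, 0.0), centers, radii), inside_surface((0.4, 0.0, 0.0), centers, radii))
```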

Let the "ideal value" of −Σi αi be α* > 0 and the "ideal value" of Σi βi be β* > 0. If −Σi αi > α*, we define the regret measure as −dα+ = Σi αi + α*; otherwise, it is 0. If −Σi αi < α*, the regret measure is defined as dα− = α* + Σi αi; otherwise, it is 0. Thus, we have (i) α* + Σi αi = dα− − dα+, (ii) |α* + Σi αi| = dα− + dα+, and (iii) dα−, dα+ ≥ 0. Similarly, we have β* − Σi βi = dβ− − dβ+, |β* − Σi βi| = dβ− + dβ+, and dβ−, dβ+ ≥ 0. So M1 can be changed into M2:

(M2)  Minimize  dα− + dα+ + dβ− + dβ+
      Subject to:
        α* + Σi αi = dα− − dα+,
        β* − Σi βi = dβ− − dβ+,                                  (3)
        Ai X = b + αi − βi,  for all Ai in G1,
        Ai X = b − αi + βi,  for all Ai in G2,

where Ai, α*, and β* are given, X and b are unrestricted, and αi, βi, dα−, dα+, dβ−, dβ+ ≥ 0. If b is given and X is found, we can classify a labeled or unlabeled record by using the linear discriminant AX. The standard two-class MCLP algorithm is based on M2. It uses the idea of linear programming to determine a boundary separating the classes. Compared with other classification tools, it is simple and direct, free of statistical assumptions, flexible in defining and modifying parameters, and high on classification accuracy [5]. It has been widely used in business fields, such as credit card analysis, fraud detection, and so on. But faced with high-dimensional data or very large datasets, its computing efficiency is sometimes low.

2.2 Progressive Sampling

Progressive sampling [7] is a well-known dynamic sampling method that can be used on large datasets. It attempts to maximize model accuracy as efficiently as possible. It is based on the fact that when the sample size is large enough, with the further increase
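The following Python sketch shows how a model of the M2 form (3) could be set up and solved as a linear program with scipy.optimize.linprog. It is a minimal sketch under our own variable layout and naming, not the authors' implementation, and it fixes b to a constant since the text assumes b is given.

```python
import numpy as np
from scipy.optimize import linprog

def mclp_m2(A, labels, alpha_star=1.0, beta_star=1.0, b=1.0):
    """Solve an M2-style MCLP model.

    A: (n, d) data matrix, labels: +1 for G1, -1 for G2.
    Variable layout: [X (d), alpha (n), beta (n), d_a-, d_a+, d_b-, d_b+]."""
    n, d = A.shape
    nvar = d + 2 * n + 4
    c = np.zeros(nvar)
    c[-4:] = 1.0                                    # minimize d_a- + d_a+ + d_b- + d_b+

    A_eq = np.zeros((n + 2, nvar))
    b_eq = np.zeros(n + 2)
    # sum_i alpha_i - d_a- + d_a+ = -alpha*
    A_eq[0, d:d + n] = 1.0; A_eq[0, -4] = -1.0; A_eq[0, -3] = 1.0; b_eq[0] = -alpha_star
    # -sum_i beta_i - d_b- + d_b+ = -beta*
    A_eq[1, d + n:d + 2 * n] = -1.0; A_eq[1, -2] = -1.0; A_eq[1, -1] = 1.0; b_eq[1] = -beta_star
    for i in range(n):                              # A_i X -/+ alpha_i +/- beta_i = b
        A_eq[2 + i, :d] = A[i]
        sign = 1.0 if labels[i] > 0 else -1.0
        A_eq[2 + i, d + i] = -sign                  # G1: -alpha_i, G2: +alpha_i
        A_eq[2 + i, d + n + i] = sign               # G1: +beta_i,  G2: -beta_i
        b_eq[2 + i] = b

    bounds = [(None, None)] * d + [(0, None)] * (2 * n + 4)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:d], res
```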


of the sample size, the model accuracy does not increase significantly. It is in fact a trade-off between classification accuracy and computing cost. Because of the dynamic interaction of the sampling process, it overcomes the weakness of one-off (static) sampling. It starts with a small sample n0 and augments the sample size progressively at each step. At each step, a model is built from the current sample and is evaluated. If the resulting model has not reached the user-specified accuracy threshold, the algorithm must iterate again. Generally, the relation between accuracy and the corresponding sample size can be expressed as a learning curve (Figure 1). The convergence point nmin of the curve is the approximately optimal sample size. In progressive sampling, two key aspects affecting sampling effectiveness and efficiency are the schedule for increasing the sample size and the stopping (convergence) criterion of sampling. As for the first aspect, [6] proposed an arithmetic increasing schedule as

$$S_a = \{n_0, n_1, n_2, \dots, n_k\} = \{n_0,\ n_0 + n_\delta,\ n_0 + 2n_\delta,\ \dots,\ n_0 + k \cdot n_\delta\}.$$

S g = { n0 ,n1 ,n2 ,...,nk } = { n0 ,a ⋅ n0 ,a 2 ⋅ n0 ,...,a k ⋅ n0 } . [6] drew a conclusion that arithmetic sampling is more efficient that one-off sampling, and [7] verified geometrical sampling is more efficient than arithmetic sampling.

Fig. 1. Accuracy vs. sample size

2.3 Classification Committee To improve prediction performance of an individual classifier, classification committee techniques are often used in many classification methods. There are many researches on committee/multi classifier/ensemble. The most popular guidelines for combining all individual classifiers are Bagging[8], Boosting[9],Random Subspace[10], and Random Forest[11]. There have been many variations of the above guidelines. The combination rules mainly include simple majority (mainly for Bagging), weighted majority (mainly for Boosting), minimum, maximum, product, average, Naive Bayes and so on[12]. Bagging is based on bootstrapping [13] and aggregating concepts, so it incorporates the benefits of both approaches. In Bagging, m random independent bootstrap samples with the same size n(≤N) are taken out from the original dataset N, and then m individual classifiers are developed, and finally a classification committee

A Dynamic Committee Scheme on MCLP Classification Method

405

is made by aggregating these m member classifiers using a simple majority vote. Bagging can be executed in parallel way, thus it is efficient.

3 A Dynamic Committee Scheme on MCLP Classification Method In this paper, we want to explore the approximately optimal sample size nmin for MCLP classification method with respect to a given dataset. For simplicity, our research only focuses on two-class problems. In our scheme, a committee is integrated by m individual classifiers. The combination mechanism is a variation of Bagging technique, which takes out samples in a non-repetition way from original dataset N with ni K), where IP(S(T ) > K) is the risk-neutral probability of ending up in-the-money and Δ is the delta of the option, the sensitivity of the option with respect to changes in the stock price. Both IP(S(T ) > K) and Δ can be recovered by inverting techniques, e.g., by Gil-Palaez inversion [8]. Carr and Madan [4] considered another approach by directly taking the Fourier transform of the damped option price with respect to k, the logarithm of the strike price. Premultiplying the option price with a damping function exp (αk) to ensure the existence of the Fourier transform, Carr and Madan ended up with    e−rτ φ(u − (α + 1)i) αk −rτ ,(3) eiuk E (S(T ) − ek )+ dk = F{e V (t, k)}= e −(u − αi)(u − (α + 1)i) IR where i the imaginary unit, k is the logarithm of the strike price K and   φ is the characteristic function of the log-underlying, i.e., φ(u) = E eiu ln S(T ) . The methods considered up till here can only handle the pricing of European options. 1

Throughout the paper we assume that interest rates are deterministic, this assumption can be relaxed at the cost of increasing the dimensionality of some of the methods.

A Fast Method for Pricing Early-Exercise Options with the FFT

417

Define the set of exercise dates as T = {t0 , . . . , tM } and assume the exercise dates are equally spaced: tk+1 − tk = Δt. The best known examples of early exercise options are the American and Bermudan options. American options can be exercised at any time prior to the option’s expiry; Bermudan options can only be exercised at certain dates in the future. The Bermudan option price can then be found via backward induction as  C(tk , S(tk )) = e−rΔt E [V (tk+1 , S(tk+1 ))] k = M − 1, . . . , 0, (4) V (tk , S(tk )) = max{C(tk , S(tk )), E(tk , S(tk ))}, where C denotes the continuation value of the option and V is the option value on the very next exercise date. Clearly the dynamic programming problem in (4) is a successive application of the risk-neutral valuation formula, as we can write the continuation value as  −rΔt V (tk+1 , y)f (y|S(tk ))dy, (5) C(tk , S(tk )) = e IR

where f (y|S(tk )) represents the probability density describing the transition from S(tk ) at tk to y at tk+1 . Based on (4) and (5) the QUAD method was introduced in [1]. The method requires that the transition density is known in closed-form. This requirement is relaxed in [12], where the QUAD-FFT method is introduced and the underlying idea is that the transition density can be recovered by inverting the characteristic function. But the overall complexity of both methods is O(M N 2 ) for an M -times exercisable Bermudan option with N grid points used to discretize the price of the underlying asset.

3

The CONV Method

One of the refining properties of a L´evy process is that its increments are independent of each other, which is the main premise of the CONV method: f (y|x) = f (y − x).

(6)

Note that x and y do not have to represent the asset price directly, they could be monotone functions of the asset price. The assumption made in (6) therefore certainly holds when the asset price is modeled as a monotone function of a L´evy process, since one of the defining properties of a L´evy process is that its increments are independent of each other. In this case x and y in (6) represent the log-spot price. By including (6) in (5) and changing variables z = y − x the continuation value can be expressed as  ∞ −rΔt V (tk+1 , x + z)f (z)dz, (7) C(tk , x) = e −∞

which is a cross-correlation of the option value at time tk+1 and the density f (z). If the density function has an easy closed-form expression, it may be beneficial

418

R. Lord et al.

to compute the integral straight forwardly. However, for many exponential L´evy models we either do not have a closed-form expression for the density (e.g. the CGMY/KoBoL model of [3] and many EAJD models), or if we have, it involves one or more special functions (e.g. the Variance Gamma model). Since the density is hard to obtain, let us consider taking the Fourier transform of (7). In the remainder we will employ the following definitions for the continuous Fourier transform and its inverse,  ∞ ˆ e−iut h(t)dt, (8) h(u) := F {h(t)}(u) = −∞  ∞ 1 ˆ ˆ = eiut h(u)du. h(t) := F −1 {h(u)}(t) (9) 2π −∞ If we dampen the continuation value (7) by a factor exp (αx) and subsequently take its Fourier transform, we arrive at  ∞  ∞ erΔt F {eαx C(tk , x)}(u) = e−iux eαx V (tk+1 , x + z)f (z)dzdx. (10) −∞

−∞

Changing the order of the integrals and the variables by x = y − z, we obtain  ∞ ∞ rΔt αx e F {e C(tk , x)}(u) = e−i(u+iα)y V (tk+1 , y)dy ei(u+iα)z f (z)dz −∞ −∞  ∞  ∞ −i(u+iα)y e V (tk+1 , y)dy ei(u+iα)z f (z)dz = −∞

−∞

= F {eαy V (tk+1 , y)}(u) φ(u + iα).

(11)

In the last step we used the fact that the complex-valued Fourier transform of the density is simply the extended characteristic function  ∞ ei(x+yi)z f (z)dz, (12) φ (x + yi) = −∞

which is well-defined when φ(−yi) < ∞, as |φ(x+yi)| ≤ |φ(−yi)|. Inverse Fourier transform and undamping on (11) yield the CONV formula:   C(tk , x) = e−rΔt−αx F −1 F {eαy V (tk+1 , y)}(u) · φ(u + iα) . (13) To value Bermudan options, one can recursively call (13) and (4) backwards in time: First recover the option values on the last early-exercise date; then feed them into (13) and (4) to obtain the option values on the second last early-exercise date; · · ·, continue the procedure till the first early-exercise date is reached; for the last step, feed the option value on the first early-exercise date into (13) and there we obtain the option values on the initial date. To value American options, there are two routes to follow: they can be approximated either by Bermudan options with many early exercise dates or by Richardson extrapolation based on only a series of Bermudan options with an

A Fast Method for Pricing Early-Exercise Options with the FFT

419

increasing number of early exercise dates. In the experiments we evaluated both approaches and compared their CPU time and the accuracy. As for the approach via Richardson extrapolation, our choice of scheme is the one proposed by Chang, Chung, and Stapleton [5].

4

Implementation

Let’s ignore the damping factor in this section, for the ease of analysis, and simplify the notations as: e−rΔt C(x, tk ) → C(x) and V (y, tk+1 ) → V (y). lies in TL :=  Suppose  that we are only interested in a portion of C(x)Athat  − L2 , L2 . Assume that f (z) ≈ 0 for z outside TA := − A 2 , 2 . Both L and A denote positive real numbers. Then we may re-write the risk-neutral valuation formula as   V (x + z)f (z)dz = V (x + z)f (z)dz, (14) C(x) = TA

IR

which indicates that if values of C(x) arewanted on TL then values of V (y) that L+A we need for computation lie in TA+L := − L+A . 2 , 2 Remark 1 (Value of A). When working in the log-stock domain (e.g. x := log(S)), we approximate the standard deviation of the density function by the volatility of its characteristic function, therefore approximate A by 10 times volatility. The approximation gives good results in series of experiments. 4.1

Discrete CONV Formula

Recall that functions on compact supports can be represented by their Fourier series, it then follows that we may rewrite V (y) as   2π 2π 1 V (y)e−ik A+L y dy. (15) vk eik A+L y , with vk = V (y) = A + L TA+L k∈ZZ Substitute the Fourier series of V (y) in (14) and interchange the summation and the integration (allowed by Fubini’s theorem) to result in C(x) =



 vk

k∈ZZ

2π 2π f (z)eik A+L z dz eik A+L x ,

(16)

TA

where the integration inside the brackets is precisely the definition of the char2π . Truncate the series in (16) to yield acteristic function at u = k A+L C(x) =

 k∈ZN



2π 2π vk · φ k · eik A+L x , A+L

(17)

420

R. Lord et al.

where ZN = {n| − N2 ≤ n < N2 , ∈ ZZ}. Up to this point, (17) is almost ready for the implementation, were vk to be obtained numerically as well. To recover vk , quadrature rules are employed. With composite mid-point rule one obtains v˜k =

2π Δy  −ik L+A yj e V (yj ), L+A

(18)

j∈ZN

where Δy = L+A N , {yj := jΔy + yc |j ∈ ZN } and yc denotes the grid center. It then yields the discrete version of the CONV formula after substituting (18) into (17): Cm =

 1  iuk xm e φ(uk ) e−iuk yj V (yj ), N k∈ZN

(19)

j∈ZN

2π where uk = k L+A and {xm := mΔy + xc |m ∈ ZN } with grid center xc . Note that the x- and y-grids share the same mesh size so that the same u-grid can be used in both the inner and the outer summations.

4.2

Computational Complexity and Convergence Rate

The pleasing feature of (19) is that both summations can be fast resolved by existing FFT algorithms. Therefore, the overall computational complexity is O(N log(N )) for European options, and O(M N log(N )) for an M -times exercisable Bermudan options. In the mean while, it can be proven analytically that the convergence rate of the method is O( N12 ) for both vanilla European and Bermudan options. Though we’ll not include the error analysis in this paper, the regular point-wise convergence rate of the method can be well observed in the experiment results.

5

Numerical Results

By various numerical experiments we aim to show the speed of the computations and the flexibility of the CONV method. Three underlying models are adopted in the experiments: Geometric Brownian Motion (GBM), Variance Gamma (VG), and CGMY. The pricing problems are of Bermudan and American style. The computer used for the experiments has a Intel Pentium 4 CPU, 2.8 GHz frequency and a total 1011 MB physical memory. The code is programmed in Matlab. Results for 10-times exercisable Bermudan options under GBM and VG are summarized in table 1, where the fast computational speed (e.g. less than 1 second for N = 216 ), the high accuracy (e.g. with only 26 grid points the error is already of level 10−2 ) and the regular convergence rate (e.g. the convergence rate is 4 for Bermudan payoff) are shown. Results for American options under VG and CGMY are summarized in table 2, where ‘P(N/2)’ denotes the results obtained by approximating the American option values directly by N/2-times exercisable

A Fast Method for Pricing Early-Exercise Options with the FFT

421

Table 1. CPU time, errors and convergence rate in pricing 10-times exercisable Bermudan put under GBM and VG with the CONV method GBM: Reference= 10.4795201; VG: Reference= 9.04064611; N = 2d time(sec) absolute error convergence time(sec) absolute error convergence 6 0.002 9.54e-02 0.001 7.41e-02 7 0.002 2.44e-02 3.91 0.002 5.42e-03 1.37 0.003 6.45e-03 3.78 0.003 2.68e-03 2.02 8 0.010 1.69e-03 3.81 0.006 6.10e-04 4.39 9 0.011 4.47e-04 3.79 0.015 1.38e-04 4.40 10 0.021 1.12e-04 3.97 0.022 3.16e-05 4.38 11 0.043 2.83e-05 3.97 0.042 7.92e-06 3.99 12 0.091 7.09e-06 4.00 0.096 1.99e-06 3.97 13 0.210 1.76e-06 4.04 0.208 5.15e-07 3.88 14 For GBM: S0 = 100, K = 110, T = 1, σ = 0.2, r = 0.1, q = 0; For VG: S0 = 100, K = 110, T = 1, σ = 0.12, θ = −0.14, ν = 0.2, r = 0.1, q = 0; Reference values are obtained by the PIDE method with 4 million grid points. Table 2. CPU time, errors and convergence rate in pricing 10-times exercisable Bermudan put under VG and CGMY with the CONV method

VG: Reference= 0.800873607 P(N/2) Richardson N = 2d time(sec) error time(sec) error 7 0.01 4.61e-02 0.03 4.51e-02 0.04 6.47e-03 0.05 1.36e-02 8 0.11 6.78e-03 0.07 2.69e-03 9 0.45 5.86e-03 0.14 1.43e-03 10 1.73 2.87e-03 0.28 2.71e-04 11 7.18 1.03e-03 0.57 5.76e-05 12

CGMY(Y < 1) CGMY(Y > 1) Reference= Reference= 0.112171 [2] 9.2185249 [13] Richardson Richardson time(sec) error time(sec) error 0.02 1.37e-02 0.02 5.68e-01 0.04 2.08e-03 0.04 2.78e-01 0.07 4.83e-04 0.08 1.29e-01 0.12 9.02e-05 0.14 8.68e-03 0.26 4.21e-05 0.28 6.18e-04 0.55 2.20e-05 0.59 6.14e-03

For VG: S0 = 100, K = 90, T = 1, σ = 0.12, θ = −0.14, ν = 0.2, r = 0.1, q = 0; Reference value from PIDE implementation with about 65K × 16K grid points For CGMY(Y < 1): Y = 0.5, C = 1, G = M = 5, S0 = 1, K = 1, T = 1, r = 0.1, q = 0; For CGMY(Y > 1): Y = 1.0102, C = 0.42, G = 4.37, M = 191.2, S0 = 90, K = 98, T = 0.25, r = 0.06, q = 0;

Bermudan options, and ‘Richardson’ denotes the results obtained by the 6-times repeated Richardson extrapolation scheme. For the VG model, the extrapolation method turns out to converge much faster and spend far less time than the direct approximation approach (e.g., to get the same 10−4 accuracy, the extrapolation method is more than 20 times faster than the direct-approximation method). For CGMY model, results by the extrapolation approach are given. They demonstrate that the CONV method can be well combined with the extrapolation technique as well as any models with known characteristic functions.

422

6

R. Lord et al.

Conclusions and Future Works

The CONV method, like other FFT-based methods, is quite flexible w.r.t the choice of asset process and also the type of option contract. It can be applied if the underlying follows a L´evy processe and its characteristic function is known. The CONV method is highly accurate and fast in pricing Bermudan and American options. It can be used for fast option pricing and for parameter calibration purposes. The future works include thorough error analysis and application of the method to exotic options. Generalization of the method to high-dimensions and incorporation of the method with sparse grid method are also of our great interest.

References 1. Andricopoulos, A.D., Widdicks, M., Duck, P.W. and Newton, D.P.: Universal Option Valuation Using Quadrature, J. Financial Economics, 67(2003),3: 447-471 2. Almendral, A. and Oosterlee, C.W.: Accurate Evaluation of European and American Options Under the CGMY Process., to appear in SIAM J. Sci. Comput(2006) 3. Boyarchenko, S. I. and Levendorski˘ı, S. Z.: Non-Gaussian Merton-BlackScholes theory, vol. 9 of Advanced Series on Statistical Science & Appl. Probability, World Scientific Publishing Co. Inc., River Edge, NJ, 2002 4. Carr, P. P. and Madan, D. B.: Option valuation using the Fast Fourier Transform, J. Comp. Finance, 2 (1999), pp. 61–73 5. Chang, C-C , Chung, S-L and Stapleton, R.C.: Richardson extrapolation technique for pricing American-style options Proc. of 2001 Taiwanese Financial Association, Tamkang University Taipei, June 2001. Available at http://papers.ssrn. com/sol3/papers.cfm?abstract id=313962 6. Cont, R. and Tankov, P.: Financial modelling with jump processes, Chapman & Hall, Boca Raton, FL, 2004 7. Duffie, D., Pan, J. and Singleton, K.: Transform analysis and asset pricing for affine jump-diffusions. Econometrica 68(2000): 1343–1376 8. Gil-Pelaez, J.: Note on the inverse theorem. Biometrika 37(1951): 481-482 9. Heston, S.: A closed-form solution for options with stochastic volatility with applications to bond and currency options, Rev. Financ. Stud., 6 (1993), pp. 327–343. 10. Hirsa, A. and Madan, D. B.: Pricing American Options Under Variance Gamma, J. Comp. Finance, 7 (2004). 11. Matache, A. M., Nitsche, P. A. and Schwab, C.: Wavelet Galerkin pricing of American options on L´ evy driven assets, working paper, ETH, Z¨ urich, 2003. 12. O’Sullivan, C.: Path Dependent Option Pricing under Levy Processes EFA 2005 Moscow Meetings Paper, Available at SSRN: http://ssrn.com/abstract=673424, Febr. 2005. 13. Wang, I., Wan, J.W. and Forsyth, P. : Robust numerical valuation of European and American options under the CGMY process. Techn. Report U. Waterloo, Canada, 2006.

Neural-Network-Based Fuzzy Group Forecasting with Application to Foreign Exchange Rates Prediction Lean Yu1,2, Kin Keung Lai2, and Shouyang Wang 1 1

Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, China {yulean,sywang}@amss.ac.cn 2 Department of Management Sciences, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong {msyulean,mskklai}@cityu.edu.hk

Abstract. This study proposes a novel neural-network-based fuzzy group forecasting model for foreign exchange rates prediction. In the proposed model, some single neural network models are first used as predictors for foreign exchange rates prediction. Then these single prediction results produced by each single neural network models are fuzzified into some fuzzy prediction representations. Subsequently, these fuzzified prediction representations are aggregated into a fuzzy group consensus, i.e., aggregated fuzzy prediction representation. Finally, the aggregated prediction representation is defuzzified into a crisp value as the final prediction results. For illustration and testing purposes, a typical numerical example and three typical foreign exchange rates prediction experiments are presented. Experimental results reveal that the proposed model can significantly improve the prediction performance for foreign exchange rates. Keywords: Artificial neural networks, fuzzy group forecasting, foreign exchange rates prediction.

1 Introduction Foreign exchange rate forecasting has been a common research stream in the last few decades. Over this time, the research stream has gained momentum with the advancement of computer technologies, which have made many elaborate computation methods available and practical [1]. Due to its high volatility, foreign exchange rates forecasting is regarded as a rather challenging task. For traditional statistical methods, it is hard to capture the high volatility and nonlinear characteristics hidden in the foreign exchange market. As a result, many emerging artificial intelligent techniques, such as artificial neural networks (ANN), were widely used in foreign exchange rates forecasting and obtained good prediction performance. For example, De Matos [2] compared the strength of a multilayer feed-forward neural network (FNN) with that of a recurrent network based on the forecasting of Japanese yen futures. Kuan and Liu [3] provided a comparative evaluation of the performance of MLFN and a recurrent Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 423–430, 2007. © Springer-Verlag Berlin Heidelberg 2007

424

L. Yu, K.K. Lai, and S. Wang

neural network (RNN) on the prediction of an array of commonly traded exchange rates. In the article of Tenti [4], the RNN is directly applied to exchange rates forecasting. Hsu et al. [5] developed a clustering neural network (CNN) model to predict the direction of movements in the USD/DEM exchange rate. Their experimental results suggested that their proposed model achieved better forecasting performance relative to other indicators. In a more recent study by Leung et al. [6], the forecasting accuracy of MLFN was compared with the general regression neural network (GRNN). The study showed that the GRNN possessed a greater forecasting strength relative to MLFN with respect to a variety of currency exchange rates. Similarly, Chen and Leung [7] adopted an error correction neural network (ECNN) model to predict foreign exchange rates. Yu et al. [8] proposed an adaptive smoothing neural network (ASNN) model by adaptively adjusting error signals to predict foreign exchange rates and obtained good performance. However, neural networks are a kind of very unstable learning paradigm. Usually, small changes in the training set and/or parameter selection can produce large changes in the predicted output. To remedy the drawbacks, this paper attempts to utilize a set of neural network predictors to construct a fuzzy group forecasting methodology for foreign exchange rates prediction. In the proposed model, a number of single neural network models are first used as predictors for foreign exchange rates prediction. Then these single prediction results produced by each single neural network models are fuzzified into some fuzzy prediction representations. Subsequently, these fuzzified prediction representations are aggregated into a fuzzy group consensus, i.e., aggregated fuzzy prediction representation. Finally, the aggregated fuzzy prediction representation is defuzzified into a crisp value as the final prediction results. The major aim of this study is to present a new forecasting paradigm called fuzzy group forecasting that can significantly improve the prediction capability of neural networks. The rest of this study is organized as follows. Section 2 describes the proposed neural network based fuzzy group forecasting model in detail. For further illustration, a typical numerical example and three typical foreign exchange rates prediction experiments are presented in Section 3. Section 4 concludes the study.

2 Neural-Network-Based Fuzzy Group Forecasting Methodology In this section, a neural-network-based fuzzy group forecasting model is proposed for time series prediction problem. The basic idea of the fuzzy group forecasting model is to make full use of group member’s knowledge to make a more accurate prediction over any single neural network predictors. For convenience of simplification, this study utilizes three group feed-forward neural network members to construct a fuzzy group forecasting model. Generally speaking, the neural-network-based fuzzy group forecasting consists of four different steps. Step I: Single Neural Predictor Creation. In order to construct different single neural predictor for the neural network model with the same structure, we use different training sets to create different neural network predictors. In this study the bagging algorithm proposed by Breiman [9] is used to generate different training sets.

Neural-Network-Based Fuzzy Group Forecasting

425

Step II: Single Predictors Fuzzification. Based on these different training sets, each neural network can produce some different predictors. Using the different predictors, we can obtain different prediction results. Because neural predictor is an unstable learning paradigm, we are required to integrate these different results produced by different predictors, as earlier noted in Section 1. For these different predictions, we consider them as a fuzzy number for further processing. For example, suppose that the original dataset is used to create k different training sets, i.e., TR1, TR2, …, TRk, via

f1i , f 2i , " f ki , for

the bagging algorithm, we can use them to generate k models, i.e.,

the ith neural network predictor. Accordingly, k different predictions, i.e.,

f1i ( x), f 2i ( x), " f ki ( x) , can be generated by the ith neural network predictor when out-of-sample forecasting. In order to make full use of all information provided by different predictions, without loss of generalization, we utilize the triangular fuzzy number to construct the fuzzy representation for different predicted results. That is, the smallest, average and largest of the k predictions are used as the left-, mediumand right membership degrees. That is, the smallest and largest values are seen as the optimistic and pessimistic prediction and the average value is considered to be the most likely prediction. Using this fuzzification method, one can make a fuzzy prediction judgment for each time series. More clearly, the triangular fuzzy number in this case can be represented as

([ ( k ], [max ( f

)] ( x ) )])

~ Z i = ( z i1 , z i 2 , z i 3 ) = min f 1i ( x ), f 2i ( x ), " , f ki ( x ) ,

[∑

k j =1

f ji ( x )

i 1

( x ), f 2i ( x ), " , f ki

(1)

In such a fuzzification processing way, the prediction problem is extended into a fuzzy group forecasting framework. Step III: Fuzzy Prediction Aggregation. Suppose that there is p different group members, i.e., p different neural network predictors, let Z~ = ψ ( Z~ 1 , Z~ 2 , " Z~ p ) be the aggregation of the p fuzzy prediction representations where ψ(•) is an aggregation function. Now how to determine the aggregation function or how to aggregate these fuzzy prediction representations to be a group prediction consensus is an important and critical problem under fuzzy group forecasting environment. Generally speaking, there are many aggregation operators and rules that can be used to aggregate fuzzy prediction representations. Usually, the fuzzy prediction representations of p group members will be aggregated using a commonly used linear additive procedure, i.e.,

~ ~ p Z = ∑i =1 wi Z i =

(∑

p i =1

wi z i1 , ∑i =1 wi z i 2 , ∑i =1 wi z i 3 p

p

)

(2)

where wi is the weight of the ith group forecasting member, i = 1, 2, …, p. The weights usually satisfy the following normalization condition:

∑i =1 wi = 1 p

(3)

The key to this aggregation procedure is how to determine the optimal weight wi of the ith fuzzy prediction representation under fuzzy group forecasting environment.

426

L. Yu, K.K. Lai, and S. Wang

Often the fuzzy representations of the predictions are largely dispersed and separated. In order to achieve the maximum similarity, the fuzzy prediction representations should move towards one another. This is the principle on the basis of which an aggregated fuzzy prediction representation is generated. Relied on this principle, a leastsquares aggregation optimization approach is proposed to integrate fuzzy prediction results produced by different prediction models. The generic idea of this proposed aggregation optimization approach is to minimize the sum of squared distance from one fuzzy prediction to another and ~ thus make ~ them maximum agreement. Particularly, the squared distance between Z i and Z j can be defined by

d ij2 =

( (w Z~ − w Z~ ) ) = ∑ 2

i

i

j

2

3 l =1

j

(w z

i il

− w j z jl )

(4)

2

Using this definition, we can construct the following optimization model, which minimizes the sum of the squared distances between all pairs of weights fuzzy prediction representations:

[

p p p p ⎧ 3 2 2 ⎪Minimize D = ∑ ∑ d ij = ∑ ∑ ∑l =1 (wi z il − w j z jl ) 1 1 , = = ≠ = 1 = 1 , ≠ i j j i i j j i ⎪ ⎪ p ( P) ⎨Subject to ∑ wi = 1 i =1 ⎪ ⎪ ⎪ wi ≥ 0, i = 1,2, " p ⎩

]

(5) (6) (7)

In order to solve the above optimal weights, the constraint (7) is first not considered. If the solution turns out to be nonnegative, then constraint (7) is satisfied automatically. Using Lagrange multiplier method, Equations (5) and (6) can construct the following Lagrangian function: p

L( w, λ ) = ∑

∑ [∑l =1 (wi zil − w j z jl ) ]− 2λ (∑i =1 wi = 1) p

3

p

2

(8)

i =1 j =1, j ≠ i

Differentiating (8) with wi, we can obtain p ∂L ⎡3 ⎤ = 2 ∑ ⎢∑ (wi z il − w j z jl )zil ⎥ − 2λ = 0 for each i = 1, 2, …, p. ∂wi j =1, j ≠ i ⎣ l =1 ⎦

(9)

Equation (9) can be simplified as

( p − 1)(∑3l =1 zil2 )wi − ∑ ⎡⎢∑ (zil z jl )⎤⎥w j − λ = 0 p

j =1, j ≠ i

3

⎣ l =1



for each i = 1, 2, …, p.

(10)

Setting W = (w1, w2, …, wp)T, I = (1, 1, …, 1)T and the T denotes transpose, 3 3 bij = ( p − 1) ∑l =1 zil2 , i = j = 1,2,", p , bij = − ∑ l =1 z il z jl , i , j = 1, 2, " , p ; j ≠ i and

(

)

(

)

Neural-Network-Based Fuzzy Group Forecasting

(

B = (bij ) p× p

⎡ 3 2 ⎢( p − 1) ∑l =1 z1l ⎢ 3 ⎢ − ∑ (z 2l z1l ) = ⎢ l =1 ⎢ " 3 ⎢ ⎢ − ∑ (z pl z1l ) ⎣ l =1

)

3

− ∑ (z1l z 2l ) l =1

( p − 1)(∑l3=1 z 22l ) "

− ∑ (z pl z 2l ) 3

l =1

3 ⎤ − ∑ (z1l z pl ) ⎥ l =1 ⎥ 3 − ∑ (z 2l z pl ) ⎥ " ⎥ l =1 ⎥ " " 3 2 ⎥ " ( p − 1) ∑l =1 z pl ⎥ ⎦

427

"

(

(11)

)

Using matrix form and above settings, Equations (10) and (6) can be rewritten as

BW − λI = 0

(12)

I TW = 1

(13)

Similarly, Equation (5) can be expressed in a matrix form as D = W BW . Because D is a squared distance which is usually larger than zero, B should be positive definite and invertible. Using Equations (12) and (13) together, we can obtain T

λ* = 1 (I T B −1 I )

(

W * = B −1 I

) (I

T

B −1 I

(14)

)

(15)

Since B is a positive definite matrix, all its principal minors will be strictly positive and thus B is a nonsingular M-matrix [10]. According to the properties of M-matrices, we know B-1 is nonnegative. Therefore W * ≥ 0, which implies that the constraint in Equation (7) is satisfied. Step IV: Aggregated Prediction Defuzzification. After completing aggregation, a fuzzy group consensus can be obtained by Equation (2). To obtain crisp value of credit score, we use a defuzzification procedure to obtain the crisp value for decisionmaking purpose. According to Bortolan and Degani [11], the defuzzified value of a triangular fuzzy number

~ Z = ( z1 , z 2 , z 3 ) can be determined by its centroid, which is

computed by z3 ⎛ ⎛ x − z1 ⎞ z −x ⎞ ⎟dx ⎜⎜ x ⋅ ⎟⎟dx + ∫ ⎜⎜ x ⋅ 3 z1 z2 z 2 − z1 ⎠ z 3 − z 2 ⎟⎠ ∫ (z + z + z3 ) z1 ⎝ ⎝ = 1 2 z = z3 = (16) z2 ⎛ x − z ⎞ z3 ⎛ z − x ⎞ 3 3 1 ∫z1 μ ~z ( x)dx ⎜ ⎟ ⎜ ⎟ dx dx + ∫z1 ⎜⎝ z 2 − z1 ⎟⎠ ∫z2 ⎜⎝ z3 − z2 ⎟⎠ z3

xμ ~z ( x)dx



z2

In this way, a final group forecasting consensus is computed with the above processes. For illustration and verification purposes, an illustrated numerical example and three typical foreign exchange rates are conducted.

428

L. Yu, K.K. Lai, and S. Wang

3 Experiments In this section, we first present an illustrative numerical example to explain the implementation process of the proposed fuzzy group forecasting model using US dollar against Chinese Renminbi (USD/RMB) exchange rate series. Then three typical foreign exchange rates, US dollar against each of the three currencies — British pounds (GBP), euros (EUR) and Japanese yen (JPY), are used for testing. All four exchange data are obtained from Pacific Exchange Rates Services (http://fx.sauder.ubc.ca/), provided by Professor Werner Antweiler, University of British Columbia, Vancouver, Canada. 3.1 An Illustrative Numerical Example Assume that there is USD/RMB series covered from January 1, 2006 to November 30, 2006, one would like to predict future USD/RMB exchange rate, e.g., December 1, 2006. For simplification, we first apply three standard FNN models with different topological structures to conduct this example. For example, we use three different numbers of hidden neurons to generate three different FNN models. In this example, we try to utilize five different models for prediction. For this purpose, the bagging sampling algorithm [9] is then used to create five different training sets. For each FNN model, five different training sets are used and five different prediction results are presented. With the above assumptions, three FNN models can produce 15 different prediction results, each for five predictions, as shown below. FNN (5-09-1) = (7.8211, 7.8321, 7.8451, 7.8122, 7.8247) FNN (5-12-1) = (7.8309, 7.8292, 7.8302, 7.8385, 7.8278) FNN (5-15-1) = (7.8082, 7.8199, 7.8208, 7.8352, 7.8393) where the numbers in parentheses indicate the topological structures of standard FNN model, for example, (5-12-1) represents five input nodes, twelve hidden nodes and one output node. Using Equation (1), the predictions of the three FNN models can be fuzzified into three triangular fuzzy numbers as fuzzy prediction representations, i.e.,

~ Z FNN1 = ( z11 , z12 , z13 ) = (7.8122,7.8270,7.8451) ~ Z FNN 2 = ( z 21 , z22 , z23 ) = (7.8278,7.8313,7.8385) ~ Z FNN 3 = ( z31 , z32 , z33 ) = (7.8082,7.8247,7.8393) The subsequent task is to aggregate the three fuzzy prediction representations into a group prediction consensus. Using the above optimization method, we can obtain the following results:

⎡ 367.6760 − 183.9417 B = ⎢⎢− 183.9417 368.0916 ⎢⎣ − 183.7432 − 183.8470 ⎡ 2.1439 2.1427 −1 3 B = 10 × ⎢⎢2.1427 2.1415 ⎢⎣ 2.1450 2.1438

− 183.7432⎤ − 183.8470⎥⎥ , 367.2971 ⎥⎦ 2.1450⎤ 2.1438⎥⎥ , 2.1461⎥⎦

Neural-Network-Based Fuzzy Group Forecasting

429

3 ~ ~ W *T = (0.3333, 0.3332; 0.3335) , Z * = ¦i=1 w*Zi = (7.8161, 7.8277, 7.8410)

The final step is to defuzzify the aggregated fuzzy prediction value into a crisp prediction value. Using Equation (14), the defuzzified value of the final group prediction consensus is calculated by

z = (7.8161 + 7.8277 + 7.8410) 3 = 7.8282 According to the data source, the actual value of USD/RMB in December 1, 2006 is 7.8283. By comparison, our fuzzy group prediction is rather promising. For further verification, three typical foreign exchange rates are tested. 3.2 Three Foreign Exchange Rates Prediction Experiments In this subsection, three typical foreign exchange rates are used to test the effectiveness of the proposed neural network-based fuzzy group forecasting model. The data used here are monthly and they consist of the USD/GBP, USD/EUR and USD/JPY. We take monthly data from January 1971 to December 2000 as in-sample (training periods) data sets (360 observations including 60 samples for cross-validations). We also take the data from January 2001 to November 2006 as out-of-sample (testing periods) data sets (71 observations), which is used to evaluate the good or bad performance of prediction based on some evaluation measurement. For evaluation, two typical indicators, normalized mean squared error (NMSE) [1] and directional statistics (Dstat) [1] are used. In addition, for comparison purpose, linear regression (LinR), logit regression (LogR) and single FNN model are used here. Particularly, we use ten different FNN models with different structures to construct the fuzzy group forecasting model. In addition, the bagging algorithm [9] is used to create ten different training sets. Accordingly, the results obtained are reported in Table 1. Table 1. The NMSE and Dstat comparisons for different models Models Single LinR Model Single LogR Model Single FNN Model Fuzzy Group Forecasting

GBP NMSE Dstat (%) 0.0811 57.75 0.0792 63.38 0.0767 69.01 0.0083 84.51

EUR NMSE Dstat (%) 0.0892 59.15 0.0669 67.61 0.0663 70.42 0.0112 83.10

NMSE 0.2346 0.1433 0.1782 0.0458

JPY Dstat (%) 56.33 70.42 71.83 81.69

From Table 1, a clear comparison of various methods for the three currencies is given via NMSE and Dstat. Generally speaking, the results obtained from the two tables also indicate that the prediction performance of the proposed neural network based fuzzy group forecasting model is better than those of the single neural network model, linear regression and logit regression forecasting models for the three main currencies. The main reasons are that (1) aggregating multiple predictions into a group consensus can definitely improve the performance, as Yu et al. [1] revealed; (2) fuzzification of the predictions may generalize the model by processing some uncertainties of forecasting; and (3) as an “universal approximator”, neural network might also make a contribution for the performance improvement.


4 Conclusions

In this study, a new neural-network-based fuzzy group forecasting model is proposed for foreign exchange rates prediction. In terms of the empirical results, we find that, across different models and on the basis of different evaluation criteria, our proposed neural-network-based fuzzy group forecasting method performs best for the test cases of the three main currencies: the British pound (GBP), the euro (EUR) and the Japanese yen (JPY). In the proposed model, the NMSE is the lowest and the Dstat is the highest, indicating that the proposed neural-network-based fuzzy group forecasting model can be used as a promising tool for foreign exchange rates prediction.

Acknowledgements

This work is supported by the grants from the National Natural Science Foundation of China (NSFC No. 70221001, 70601029), the Chinese Academy of Sciences (CAS No. 3547600), the Academy of Mathematics and Systems Sciences (AMSS No. 3543500) of CAS, and the Strategic Research Grant of City University of Hong Kong (SRG No. 7001677, 7001806).

References

1. Yu, L., Wang, S.Y., Lai, K.K.: A Novel Nonlinear Ensemble Forecasting Model Incorporating GLAR and ANN for Foreign Exchange Rates. Computers & Operations Research 32 (2005) 2523-2541
2. De Matos, G.: Neural Networks for Forecasting Exchange Rate. M.Sc. Thesis, The University of Manitoba, Canada (1994)
3. Kuan, C.M., Liu, T.: Forecasting Exchange Rates Using Feedforward and Recurrent Neural Networks. Journal of Applied Econometrics 10 (1995) 347-364
4. Tenti, P.: Forecasting Foreign Exchange Rates Using Recurrent Neural Networks. Applied Artificial Intelligence 10 (1996) 567-581
5. Hsu, W., Hsu, L.S., Tenorio, M.F.: A Neural Network Procedure for Selecting Predictive Indicators in Currency Trading. In: Refenes, A.N. (Ed.): Neural Networks in the Capital Markets. John Wiley and Sons, New York (1995) 245-257
6. Leung, M.T., Chen, A.S., Daouk, H.: Forecasting Exchange Rates Using General Regression Neural Networks. Computers & Operations Research 27 (2000) 1093-1110
7. Chen, A.S., Leung, M.T.: Regression Neural Network for Error Correction in Foreign Exchange Rate Forecasting and Trading. Computers & Operations Research 31 (2004) 1049-1068
8. Yu, L., Wang, S.Y., Lai, K.K.: Adaptive Smoothing Neural Networks in Foreign Exchange Rate Forecasting. Lecture Notes in Computer Science 3516 (2005) 523-530
9. Breiman, L.: Bagging Predictors. Machine Learning 26 (1996) 123-140
10. Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)
11. Bortolan, G., Degani, R.: A Review of Some Methods for Ranking Fuzzy Subsets. Fuzzy Sets and Systems 15 (1985) 1-19

Credit Risk Evaluation Using Support Vector Machine with Mixture of Kernel

Liwei Wei1,2, Jianping Li1, and Zhenyu Chen1,2

1 Institute of Policy & Management, Chinese Academy of Sciences, Beijing 100080, China
2 Graduate University of Chinese Academy of Sciences, Beijing 100039, China
{lwwei, ljp, zychen}@casipm.ac.cn

Abstract. Recent studies have revealed that emerging modern machine learning techniques, such as SVM, are advantageous over statistical models for credit risk evaluation. In this study, we discuss the application of a support vector machine with mixture of kernel (SVM-MK) to design a credit evaluation system that can discriminate good creditors from bad ones. Differing from the standard SVM, the SVM-MK uses a 1-norm based objective function and adopts convex combinations of single-feature basic kernels. Only a linear programming problem needs to be solved, which greatly reduces the computational cost. More importantly, it is a transparent model, and the optimal feature subset can be obtained automatically. A real-life credit dataset from a US commercial bank is used to demonstrate the good performance of the SVM-MK.

Keywords: Credit risk evaluation, SVM-MK, Feature selection, Classification model.

1 Introduction

Undoubtedly, credit risk evaluation is an important field in financial risk management. Extant evidence shows that in the past two decades bankruptcies and defaults have occurred at a higher rate than at any other time. Thus, the ability to accurately assess the existing risk and discriminate good applicants from bad ones is crucial for financial institutions, especially for any credit-granting institution, such as commercial banks and certain retailers. Due to this situation, many credit classification models have been developed to predict default accurately, and some interesting results have been obtained. These credit classification models apply a classification technique to similar data on previous customers to estimate the corresponding risk rate, so that the customers can be classified as normal or default. Some researchers used statistical models, such as linear discriminant analysis [1], logistic analysis [2] and probit regression [3], in credit risk evaluation. These models have been criticized for lack of classification precision because the covariance matrices of the good and bad credit classes are not likely to be equal. Recently, with the emergence of decision trees [4] and neural networks [5], artificial intelligence (AI) techniques have been widely applied to credit scoring tasks. They have


obtained promising results and outperformed the traditional statistical methods. But these methods often suffer from local minima and overfitting problems.

The support vector machine (SVM) was first proposed by Vapnik [6]. It has now been proved to be a powerful and promising data classification and function estimation tool. References [7], [8] and [9] applied SVM to credit analysis and obtained some valuable results. But SVM is sensitive to outliers and noise in the training sample and has limited interpretability due to its kernel theory. Another problem is that SVM has high computational complexity because a large-scale quadratic programming problem must be solved in the parameter learning procedure.

Recently, how to learn the kernel from data has drawn many researchers' attention. Reference [10] draws the conclusion that the optimal kernel can always be obtained as a convex combination of finitely many basic kernels, and some formulations [11], [12] have been proposed to perform the optimization in the manner of convex combinations of basic kernels.

Motivated by the above questions and ideas, we propose a new method named support vector machine with mixture of kernel (SVM-MK) to evaluate credit risk. In this method the kernel is a convex combination of finitely many basic kernels, each of which has a kernel coefficient and is built on a single feature. The 1-norm is utilized in SVM-MK. As a result, its objective function turns into a linear programming problem, which greatly reduces the computational complexity. Furthermore, we can select the optimal feature subset automatically and obtain an interpretable model.

The rest of this paper is organized as follows: Section 2 gives a brief outline of SVM-MK. To evaluate the performance of SVM-MK for credit risk assessment, we use a real-life credit card dataset from a major US commercial bank in Section 3. Finally, Section 4 draws the conclusion and gives an outlook on possible future research areas.

2 Support Vector Machine with Mixture of Kernel

Consider a training data set G = {(x_i, y_i)}_{i=1}^{n}, where x_i ∈ R^m is the i-th input pattern and y_i ∈ {+1, −1} is its corresponding observed result. In the credit risk evaluation model, x_i denotes the attributes of an applicant or creditor, and y_i is the observed result of timely repayment: if the applicant defaults on a loan, y_i = 1, else y_i = −1. The optimal separating hyper-plane is found by solving the following regularized optimization problem [6]:

min J(ω, ξ) = (1/2) ‖ω‖² + c Σ_{i=1}^{n} ξ_i                                   (1)

s.t.  y_i (ωᵀ φ(x_i) + b) ≥ 1 − ξ_i,   ξ_i ≥ 0,   i = 1, …, n                  (2)


where c is a constant denoting a trade-off between the margin and the sum of total errors, and φ(x) is a nonlinear function that maps the input space into a higher-dimensional feature space. The margin between the two classes is 2/‖ω‖.

The quadratic optimization problem can be solved by transforming (1) and (2) into the saddle point of the Lagrange dual function:

max  Σ_{i=1}^{n} α_i − (1/2) Σ_{i,j=1}^{n} α_i α_j y_i y_j k(x_i, x_j)          (3)

s.t.  Σ_{i=1}^{n} y_i α_i = 0,   0 ≤ α_i ≤ c,   i = 1, …, n                     (4)

where k(x_i, x_j) = φ(x_i) · φ(x_j) is called the kernel function and the α_i are the Lagrange multipliers. In practice, a simple and efficient approach is to express the kernel function as a convex combination of basic kernels:

k(x_i, x_j) = Σ_{d=1}^{m} β_d k(x_{i,d}, x_{j,d})                               (5)

where x_{i,d} denotes the d-th component of the input vector x_i.
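For concreteness, the mixture kernel in (5) can be assembled attribute by attribute. The short sketch below is only illustrative: the single-feature Gaussian kernel and all names are our own assumptions, since the choice of basic kernel is left open until the experiments.

```python
import numpy as np

def mixture_kernel(X, beta, sigma2=1.0):
    """Convex combination of single-feature Gaussian kernels, cf. Equation (5).

    X      : (n, m) data matrix, one row per applicant.
    beta   : (m,) nonnegative mixing coefficients beta_d.
    sigma2 : width of the assumed Gaussian kernel on each single feature.
    """
    n, m = X.shape
    K = np.zeros((n, n))
    for d in range(m):
        diff = X[:, d, None] - X[None, :, d]        # pairwise differences on feature d
        K += beta[d] * np.exp(-diff ** 2 / sigma2)  # single-feature Gaussian kernel
    return K

# Toy usage: 5 applicants, 3 attributes, equal mixing coefficients.
X = np.random.default_rng(0).normal(size=(5, 3))
print(mixture_kernel(X, np.full(3, 1 / 3)).shape)   # (5, 5)
```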

Substituting Equation (5) into Equation (3), multiplying Equations (3) and (4) by β_d, and letting γ_{i,d} = α_i · β_d, the Lagrange dual problem changes into:

max  Σ_{i,d} γ_{i,d} − (1/2) Σ_{i,j=1}^{n} Σ_{d=1}^{m} γ_{i,d} γ_{j,d} y_i y_j k(x_{i,d}, x_{j,d})          (6)

s.t.  Σ_{i,d} y_i γ_{i,d} = 0,   0 ≤ Σ_{d=1}^{m} γ_{i,d} ≤ c,  i = 1, …, n,   γ_{i,d} ≥ 0,  d = 1, …, m     (7)

The new coefficients γ_{i,d} replace the Lagrange coefficients α_i. The number of coefficients that need to be optimized increases from n to n × m, which increases the computational cost, especially when the number of attributes in the dataset is large. The linear programming implementation of SVM is a promising approach to reducing the computational cost of SVM and has attracted some scholars' attention. Based on the above idea, a 1-norm based linear programming is proposed:

min J(γ, ξ) = Σ_{i,d} γ_{i,d} + λ Σ_{i=1}^{n} ξ_i                               (8)


s.t.  y_i ( Σ_{j,d} γ_{j,d} y_j k(x_{i,d}, x_{j,d}) + b ) ≥ 1 − ξ_i,   ξ_i ≥ 0,  i = 1, …, n,   γ_{j,d} ≥ 0,  d = 1, …, m      (9)

In Equation (8), the regularized parameter λ controls the sparseness of the coefficients γ_{i,d}.

The dual of this linear programming is:

max  Σ_{i=1}^{n} u_i                                                            (10)

s.t.  Σ_{i=1}^{n} u_i y_i y_j k(x_{i,d}, x_{j,d}) ≤ 1,  j = 1, …, n,  d = 1, …, m,
      Σ_{i=1}^{n} u_i y_i = 0,   0 ≤ u_i ≤ λ,  i = 1, …, n                      (11)

The choice of kernel function includes the linear kernel, the polynomial kernel or the RBF kernel. Thus, the SVM-MK classifier can be represented as:

f(x) = sign ( Σ_{j,d} γ_{j,d} y_j k(x_{i,d}, x_{j,d}) + b )                     (12)

It can be found that the above linear programming formulation and its dual are equivalent to those of the approach called "mixture of kernel" [12]. So the new coefficient γ_{i,d} is called the mixture coefficient, and this approach is named "support vector machine with mixture of kernel" (SVM-MK). Compared with the standard SVM, which obtains the solution α_i by solving a quadratic programming problem, SVM-MK obtains the mixture coefficients γ_{i,d} by solving a linear programming problem, so the SVM-MK model greatly reduces the computational complexity. More importantly, the sparse coefficients γ_{i,d} give us more freedom to extract satisfactory features in the whole space spanned by all the attributes.
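To make the size of the resulting problem concrete, the primal LP (8)-(9) can be handed almost verbatim to a generic solver. The following sketch is a schematic reconstruction, not the authors' Matlab implementation: it assumes a Gaussian single-feature kernel, uses scipy's linprog as the solver, and all function and variable names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def svm_mk_fit(X, y, lam=3.0, sigma2=5.0):
    """Schematic 1-norm SVM-MK training via the linear program (8)-(9).

    Decision variables: gamma[j, d] >= 0 (n*m values), a free bias b, slacks xi_i >= 0.
    """
    n, m = X.shape
    # Single-feature Gaussian kernels k(x_{i,d}, x_{j,d}) -- assumed kernel form.
    K = np.exp(-(X[:, None, :] - X[None, :, :]) ** 2 / sigma2)      # (n, n, m)

    n_gamma = n * m
    # Objective (8): sum_{j,d} gamma_{j,d} + lam * sum_i xi_i (zero cost on b).
    c = np.concatenate([np.ones(n_gamma), [0.0], lam * np.ones(n)])

    # Margin constraints (9), rewritten as A_ub @ z <= b_ub:
    #   -y_i * sum_{j,d} gamma_{j,d} y_j k(x_{i,d}, x_{j,d}) - y_i b - xi_i <= -1
    A_margin = -(y[:, None, None] * y[None, :, None] * K).reshape(n, n_gamma)
    A_ub = np.hstack([A_margin, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)

    bounds = [(0, None)] * n_gamma + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    gamma = res.x[:n_gamma].reshape(n, m)
    return gamma, res.x[n_gamma]

def svm_mk_predict(X_train, y_train, gamma, b, X_new, sigma2=5.0):
    """SVM-MK classifier (12) with the same single-feature Gaussian kernel."""
    K = np.exp(-(X_new[:, None, :] - X_train[None, :, :]) ** 2 / sigma2)  # (n_new, n, m)
    scores = (K * (gamma * y_train[:, None])[None, :, :]).sum(axis=(1, 2)) + b
    return np.sign(scores)

# Toy usage; the real study used 65 attributes of 5000 applicants.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))
y = np.where(X[:, 0] + rng.normal(scale=0.5, size=40) > 0, 1.0, -1.0)
gamma, b = svm_mk_fit(X, y)
print("selected features:", np.flatnonzero(gamma.sum(axis=0) > 1e-6))
print("in-sample accuracy:", (svm_mk_predict(X, y, gamma, b, X) == y).mean())
```

The decision vector simply stacks the n × m mixture coefficients, the bias and the n slacks, so the whole model comes out of a single linear program; the features with nonzero total weight in gamma are the ones the 1-norm penalty keeps, which is the feature-selection effect described above.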

3 Experiment Analysis

In this section, a real-world credit dataset is used to test the performance of SVM-MK. The dataset is from a major US commercial bank. It includes detailed information on 5000 applicants, and two classes are defined: good and bad creditors. Each record consists of 65 variables, such as payment history, transactions, opened accounts, etc. Of the 5000 applicants, 815 are bad accounts and the rest are good accounts, so the dataset is greatly imbalanced. We therefore preprocess the data by means of a sampling method and make use of 5-fold cross-validation to guarantee valid results. In addition, three evaluation criteria measure the efficiency of classification:

Type I error = (number of observed good but classified as bad) / (number of observed good)          (13)

Type II error = (number of observed bad but classified as good) / (number of observed bad)          (14)

Total error = (number of false classifications) / (number of evaluation samples)                    (15)
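These three criteria translate directly into code. The helper below is only an illustrative reconstruction of definitions (13)-(15), using the class coding of Section 2 (y = +1 for default, y = −1 for good); the function name and the toy example are ours.

```python
import numpy as np

def classification_errors(y_true, y_pred, good=-1, bad=+1):
    """Type I, Type II and Total error as defined in (13)-(15)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    good_mask, bad_mask = (y_true == good), (y_true == bad)
    type1 = np.mean(y_pred[good_mask] == bad)    # observed good classified as bad
    type2 = np.mean(y_pred[bad_mask] == good)    # observed bad classified as good
    total = np.mean(y_pred != y_true)
    return type1, type2, total

# Example: 4 good and 2 bad creditors.
print(classification_errors([-1, -1, -1, -1, 1, 1], [-1, 1, -1, -1, 1, -1]))
# -> (0.25, 0.5, 0.333...)
```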

3.1 Experiment Result

Our implementation was carried out in Matlab 6.5. Firstly, the data are normalized. The Gaussian kernel is used in this method, so the kernel parameter needs to be chosen; thus the method has two parameters to be pre-set: the kernel parameter σ² and the regularized parameter λ. The Type I error (e1), Type II error (e2), Total error (e), number of selected features and the best pair of (σ², λ) for each fold using the SVM-MK approach are shown in Table 1. The average Type I error is 24.25%, the average Type II error is 17.24%, the average Total error is 23.22% and the average number of selected features is 18.

Table 1. Experimental results for each fold using SVM-MK

Fold #     e1 (%)   e2 (%)   e (%)    Optimized σ²   Optimized λ   Selected features
1          20.2     15.7     18.9     4              5             19
2          27.75    15.39    26.3     5.45           3             13
3          24.54    14.29    23.1     5              1             8
4          20.27    13.68    19.5     3              5             25
5          28.49    27.14    28.3     1.2            8             26
Average    24.25    17.24    23.22                                 18

Then we take fold #2 as an example. The experimental results of SVM-MK using various parameters are illustrated in Table 2. The performance is evaluated using five measures: the number of selected features (NSF), the selected specific features, the Type I error (e1), the Type II error (e2) and the Total error (e). From Table 2, we can draw the following conclusions:

(1) The parameter values giving the best prediction results are λ = 3 and σ² = 5.45; the number of selected features is 13, the error on the bad class is 15.39% and the total error is 26.3%, which are the lowest errors compared with the other settings. Almost eight out of ten default creditors can be discriminated from the good ones using this model, at the expense of denying a small number of non-default creditors.

(2) As λ increases, the number of selected features gradually increases. A bigger parameter λ makes the values of the coefficient matrix decrease so that the constraints of the dual linear programming can be satisfied.


Table 2. Classification error and selected features of various parameters in SVM-MK (fold #2)

(Table 2 reports, for each λ ∈ {1, 3, 5, 10, 15, 20} and each σ² ∈ {2, 5, 5.45, 10, 11.4, 15}, the number of selected features (NSF), the selected specific features, and the errors e1, e2 and e. The NSF values for the six λ settings are 3, 13, 20, 32, 41 and 54, respectively; the best setting, λ = 3 and σ² = 5.45, selects 13 features and yields e1 = 27.75%, e2 = 15.39% and e = 26.3%.)

As a result, the sparsity of γ_{i,d} in the primal LP problem deteriorates; that is to say, a bigger parameter λ results in a large number of selected features. The parameter σ², however, has no effect on the feature selection. When λ is equal to 3, only 13 attributes are selected and the best classification results are obtained. This shows that a reasonable feature extraction can greatly improve the performance of the learning algorithm. The selected attributes also help the lender easily draw conclusions about the nature of the credit risk contained in the creditors' information. This implies that λ plays a key role in the feature selection, so we must pay attention to selecting appropriate values for the parameter λ.

(3) When the value of the parameter σ² matches certain values of the parameter λ, we can get promising classification results. In general, there is a trade-off between the Type I and Type II errors, in which a lower Type II error usually comes at the expense of a higher Type I error.

3.2 Comparison of Results of Different Credit Risk Evaluation Models

The credit dataset that we used has an imbalanced class distribution, and the misclassification costs are also non-uniform: the cost of misclassifying a sample in the bad class is much higher than that of misclassifying one in the good class. So it is quite important that the prior probabilities and the misclassification costs be taken into account in order to


obtain a credit evaluation model with the minimum expected misclassification cost [13]. When there are only two populations, the cost function for computing the expected misclassification cost is as follows:

cost = c_21 × π_1 × p(2|1) + c_12 × π_2 × p(1|2)                                (16)

where c_21 and c_12 are the corresponding misclassification costs of the Type I and Type II errors, π_1 and π_2 are the prior probabilities of good and bad credit applicants, and p(2|1) and p(1|2) measure the probabilities of making a Type I error and a Type II error. In this study, p(2|1) and p(1|2) are respectively equal to the Type I and Type II errors.

The misclassification costs associated with the Type I and Type II errors are set to 1 and 5, respectively [13]. In order to further evaluate the effectiveness of the proposed SVM-MK credit evaluation model, the classification results are compared with those of some other methods on the same dataset: multiple criteria linear programming (MCLP), multiple criteria non-linear programming (MCNP), decision trees and a neural network. The results of these four models are quoted from reference [14]. Table 3 summarizes the Type I, Type II and Total errors of the five models and the corresponding expected misclassification costs (EMC).

Table 3. Errors and the expected misclassification costs of the five models

Model             e1 (%)   e2 (%)   e (%)    EMC
MCLP              24.49    59.39    30.18    0.51736
MCNP              49.03    17.18    43.84    0.52717
Decision Tree     47.91    17.3     42.92    0.51769
Neural Network    32.76    21.6     30.94    0.40284
SVM-MK            24.25    17.24    23.22    0.30445

The priors of good and bad applicants are set to 0.9 and 0.1 using the ratio of good and bad credit customers in the empirical dataset.

From Table 3, we can conclude that the SVM-MK model has better credit scoring capability in terms of the overall error, the Type I error on the good class, the Type II error on the bad class and the expected misclassification cost criterion in comparison with the former four models. Consequently, the proposed SVM-MK model can provide an efficient alternative for conducting credit evaluation tasks.
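As a quick check on Table 3, the EMC column can be reproduced from Equation (16) using the stated priors (0.9, 0.1), the misclassification costs (1, 5) and the reported Type I and Type II errors; the snippet below only re-does that arithmetic.

```python
# Expected misclassification cost, Eq. (16): EMC = c21*pi1*p(2|1) + c12*pi2*p(1|2)
c21, c12 = 1.0, 5.0          # costs of Type I and Type II errors
pi1, pi2 = 0.9, 0.1          # priors of good and bad applicants

errors = {                   # (Type I error, Type II error) in %, from Table 3
    "MCLP":           (24.49, 59.39),
    "MCNP":           (49.03, 17.18),
    "Decision Tree":  (47.91, 17.30),
    "Neural Network": (32.76, 21.60),
    "SVM-MK":         (24.25, 17.24),
}

for model, (e1, e2) in errors.items():
    emc = c21 * pi1 * e1 / 100 + c12 * pi2 * e2 / 100
    print(f"{model:15s} EMC = {emc:.5f}")
# Reproduces the EMC column, e.g. SVM-MK: 0.9*0.2425 + 5*0.1*0.1724 = 0.30445
```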

4 Conclusions

This paper presents a novel SVM-MK credit risk evaluation model. By using the 1-norm and a convex combination of basic kernels, the objective function, which is a quadratic programming problem in the standard SVM, becomes a linear programming problem, thereby greatly reducing the computational cost. In practice, it is not difficult to adjust the kernel parameter and the regularized parameter to obtain a satisfactory classification result. Through the experiment on practical data, we have obtained good classification results and demonstrated that the SVM-MK model performs well in a credit scoring system. Moreover, we obtain only a few


valuable attributes that reveal a correlation between credit and the customers' information, so the extracted features can help the lender make correct decisions. Thus SVM-MK is a transparent model, and it provides an efficient alternative for conducting credit scoring tasks. Future studies will aim at finding the regularities in the parameter settings. Generalizing rules from the selected features is another direction for further work.

Acknowledgements

This research has been partially supported by a grant from the National Natural Science Foundation of China (#70531040), and 973 Project (#2004CB720103), Ministry of Science and Technology, China.

References

1. G. Lee, T.K. Sung, N. Chang: Dynamics of modeling in data mining: Interpretive approach to bankruptcy prediction. Journal of Management Information Systems 16 (1999) 63-85
2. J.C. Wiginton: A note on the comparison of logit and discriminant models of consumer credit behavior. Journal of Financial and Quantitative Analysis 15 (1980) 757-770
3. B.J. Grablowsky, W.K. Talley: Probit and discriminant functions for classifying credit applicants: A comparison. Journal of Economics and Business 33 (1981) 254-261
4. T. Lee, C. Chiu, Y. Chou, C. Lu: Mining the customer credit using classification and regression tree and multivariate adaptive regression splines. Computational Statistics and Data Analysis 50 (2006) 1113-1130
5. Yueh-Min Huang, Chun-Min Hung, Hewijin Christine Jiau: Evaluation of neural networks and data mining methods on a credit assessment task for class imbalance problem. Nonlinear Analysis: Real World Applications 7 (2006) 720-747
6. V. Vapnik: The Nature of Statistical Learning Theory. Springer, New York (1995)
7. T. Van Gestel, B. Baesens, J. Garcia, P. Van Dijcke: A support vector machine approach to credit scoring. Bank en Financiewezen 2 (2003) 73-82
8. Y. Wang, S. Wang, K. Lai: A new fuzzy support vector machine to evaluate credit risk. IEEE Transactions on Fuzzy Systems 13 (2005) 820-831
9. Wun-Hwa Chen, Jin-Ying Shih: A study of Taiwan's issuer credit rating systems using support vector machines. Expert Systems with Applications 30 (2006) 427-435
10. C.A. Micchelli, M. Pontil: Learning the kernel function via regularization. Journal of Machine Learning Research 6 (2005) 1099-1125
11. G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, M.I. Jordan: Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research 5 (2004) 27-72
12. F.R. Bach, G.R.G. Lanckriet, M.I. Jordan: Multiple kernel learning, conic duality and the SMO algorithm. Twenty-First International Conference on Machine Learning (2004) 41-48
13. D. West: Neural network credit scoring models. Computers and Operations Research 27 (2000) 1131-1152
14. J. He, Y. Shi, W.X. Xu: Classifications of credit cardholder behavior by using multiple criteria non-linear programming. In: Y. Shi, W. Xu, Z. Chen (Eds.): CASDMKM 2004, LNAI 3327, Springer-Verlag, Berlin Heidelberg (2004) 154-163

Neuro-discriminate Model for the Forecasting of Changes of Companies Financial Standings on the Basis of Self-organizing Maps

Egidijus Merkevičius1, Gintautas Garšva1,2,3, and Rimvydas Simutis1

1 Department of Informatics, Kaunas Faculty of Humanities, Vilnius University, Muitinės st. 8, LT-44280 Kaunas, Lithuania
2 Department of Information Systems, Kaunas University of Technology, Studentu 50, LT-51368 Kaunas, Lithuania
3 Department of Informatics, Engineering and Biomechanics, Lithuanian Academy of Physical Education, Sporto 6, LT-44221 Kaunas, Lithuania
{egidijus.merkevicius, gintautas.garsva, rimvydas.simutis}@vukhf.lt

Abstract. This article presents a way for a creditor to predict the trends of debtors' financial standing. We propose a model for forecasting changes of financial standing. The model is based on self-organizing maps as a tool for prediction, grouping and visualization of large amounts of data. The inputs for training the SOM are financial ratios calculated according to any discriminate bankruptcy model. A supervised neural network automatically increases the accuracy of the forecasts by changing the weights of the ratios.

Keywords: self-organizing maps, Z-score, bankruptcy, prediction, bankruptcy class, multivariate discriminate model, Altman, Zmijewski, feed-forward neural network, model.

1 Introduction

Bankruptcy is a process which results in the reorganization of a company in order to repay debts and fulfill other liabilities. Close monitoring of the financial standing of a company is very important in order to prevent possible bankruptcy. A model for forecasting changes in financial standing is presented in this article. Fundamental bankruptcy models (Altman, Zmijewski, etc.) and supervised and unsupervised artificial neural networks are used as the basis of this model. The concept of the model is universal: any discriminate bankruptcy model can be used to describe a company by a single value.

Related work is presented in the first part of the article. The methodology of the model is described in the second part. The third part includes a description and the results of testing the model on actual financial data.


2 Related Work

Bankruptcy is described as the inability or impaired ability of an individual or organization to pay their creditors, in other words, default. One of the most important tasks of creditors is to manage credit risk by forecasting changes in the financial standing of debtors. The early history of research attempts to classify and predict bankruptcy is well documented in [1], [7]. Altman [2], Zmijewski [16], Ohlson [10], Shumway [14] and other authors are the first and fundamental creators of bankruptcy models. The key point of these models is to determine the most important indicators (ratios) and their weights in a discriminate or logistic function.

A detailed description of the SOM method is presented in [9]. During the past 15 years, investigations into SOM applications in financial analysis have been carried out; Deboeck described and analyzed most cases in [5]. Martin-del-Prio and Serrano-Cinca generated SOMs of Spanish banks and subdivided those banks into two large groups; the configuration of the banks allowed establishing the root causes of the banking crisis [11]. Based on Kiviluoto's study [8], through visual exploration one can see the distribution of important indicators (i.e. bankruptcy) on the map.

These authors have estimated only historical and current financial data of the companies and then interpreted it to forecast bankruptcy of those same companies. In this paper, we suggest generating a SOM that can be applied to forecast bankruptcy classes for companies other than those used for training.

3 Methodology

In this section we present the Neuro-discriminate model for forecasting changes of companies' financial standings on the basis of self-organizing maps (further - the Model). The Model combines several methods - multivariate discriminate analysis, self-organizing maps and a feed-forward supervised neural network - whose combination makes an original forecasting model. These methods [2], [16], [9], [1] are used in the Model in their original form with no major adjustments, so they are not presented here. The main concept of the Model is presented in Figure 1. Description of the Model concept:

1. On the basis of bankruptcy models, changes of companies' financial standing are determined (0 - negative changes, 1 - positive changes).
2. The components of the discriminate bankruptcy model are used for training an unsupervised neural network and generating the SOM. Testing of the accuracy of the SOM is executed via calculation of corresponding nodes between training and testing data.
3. The accuracy of forecasting is improved via changing of the weights. A feed-forward neural network is used in the Model as a tool for changing the weights, where the inputs are the test data and the targets are the outputs of the trained SOM.

441

Financial data data Multivariate discriminate bankruptcy model

corrected weights

FF ANN inputs / outputs

x

x

x

y

SOM

w

x

w w w w

. .. x

weights

Fig. 1. The concept of the Model

The detailed algorithm of the Model is visualized in figure 2.

M ySQL DB

Local companies financial d atabase

Weights of components of discriminate bankruptcy model

Financial ratios: components of discriminate bankruptcy model - EDGAR

inputs

trained SOM

EDGAR label nodes of trained SOM calculate corresponding nodes LT label nodes of trained SOM

Financial ratios: components of discriminate bankruptcy model - LT

LT Calculate Z-scores/ classification

EDGAR labeled nodes of trained SOM/ targets

accuracy is acceptable

Feed-forward neural network no

yes

ED GA R PRO Financial d atabas e

EDGAR Calculate Z-scores/ classification

inputs x x x

w w w w

Status Report y

w

x

... x

Fig. 2. Algorithm of proposed Model and methodology of weights adoption

The main steps of the algorithm are as follows: 1.

On the basis of financial statements of the companies, taken from EDGAR PRO Online database (further related data is named EDGAR)[6], the

442

E. Merkevičius, G. Garšva, and R. Simutis

2.

3. 4. 5.

6.

7. 8.

financial variables (ratios) and the bankruptcy scores (further named Z-scores) are calculated and converted to bankruptcy classes based on Z-scores changes during two years (0-negative changes, 1-positive changes). The same calculations are made with data, taken from other financial sources, for example, financial statements of Lithuanian companies (further related data is named LT), assigning bankruptcy classes (0-negative changes, 1-positive changes) in the same way as described above. Data preprocessing is executed. The process consists of normalizing data set, selecting map structure and the topology, setting other options like data filter, delay etc. The SOM is generated on the basis of EDGAR data. The Inputs of SOM are the Z-Score variables and the labels are bankruptcy classes. The generated SOM is labeled with the bankruptcy classes of LT companies. Labeled units of the trained SOM are compared with the same units labeled with LT bankruptcy classes. Corresponding units are calculated. If the number of corresponding EDGAR and LT labels which are located on the same SOM map unit number (accuracy of prediction) is acceptable, then status report is generated, otherwise changing of weights with the feedforward neural network (further - FF ANN) starts. The attempt to increase corresponding labels is made in order to create such a map structure in which the amount of unit numbers has the biggest corresponding label number. For this goal we have used FF ANN, where inputs are the ratios of LT calculated as bankruptcy discriminate model with no weights. The targets of the ANN are units of SOM labels which belong to correspondent LT data. The initial weights of ANN are the original weights of bankruptcy discriminate model. As a result we have changed weights. The weights of original discriminate bankruptcy model are updated with changed weights. Next iteration of presented algorithm starts (1-7 steps). When the performance of the prediction doesn’t rapidly change the algorithm has stopped.

The results of this algorithm are as follows: 1. 2. 3. 4.

New SOM with a certain prediction percentage and good visualization of large amount of data; Original way how to use different financial database in the same bankruptcy model. New multivariate discriminator model that is based on the original discriminate bankruptcy model with corrected weight variables. Automatic tool to update original weights according to accuracy of prediction – FF ANN.

Result of prediction is the most important information for creditor showing the trend of a company (positive or negative changes of financial standing of company).

Neuro-Discriminate Model for the Forecasting of Changes

443

4 Results of Testing In this section we have presented the results of Model testing. The testing of proposed Model has been executed using two real financial datasets: companies from NASDAQ list, (further - TRAINDATA) loaded from EDGAR PRO Online database and a dataset of Lithuanian company’s financial statements (TESTDATA) presented by one of the Lithuanian banks. The basis for generating the SOM is TRAINDATA. Calculated bankruptcy ratios are based on the original discriminate bankruptcy model by Zmijewski [16]. The ratios are used as inputs and changes of Z-Scores during two years are used as labels for the identification of units in the SOM. Table 1. Characteristics of financial datasets

Dataset Taken from

TRAINDATA TESTDATA EDGAR PRO Online Database Database of Lithuanian (free trial) bank. Period of financial data 7 periods consecutively 2003-2004 Amount of companies 9364 776 Count of records 56184 776 Count of records after 46353 767 elimination of missing data Number of inputs 6 (attributes) Risk classes of If the Z-score of second period is less than the Z-score of bankruptcy first period then the risk class is determined as 0, otherwise – 1. The SOM was trained using the SOM Toolbox for Matlab package [15]. Changing of weights with FF ANN has been executed using NNSISID Toolbox for Matlab [13]. The testing process was as follows: 1. 2. 3. 4. 5. 6.

Getting data from financial sources and putting them to the MySQL database. Filtering missing data. Calculating Z-scores; assigning of the risk classes. Dividing TRAIN data into two subsets: training and testing data with ratio 70:30. Preprocessing of training data: normalizing of data set, selecting structure of map and topology, setting of other options like data filter, delay etc. Executing algorithm which is described in the section 3 while accuracy of corresponding nodes between TRAINDATA and TESTDATA achieves desirable result.

In the figure 3 is presented the run of testing.

444

E. Merkevičius, G. Garšva, and R. Simutis

Fig. 3. Run of increase of Model performance

Figure 3 shows rapidly increase of Model performance: testing on the basis of original weights with 30% of TRAINDATA accuracy of prediction seeks 75.69%, while testing with TESTDATA accuracy of prediction seeks 79.15%; after changing of weights on the 11 step of cycle the accuracy of prediction seeks respectively 87.08% and 84.47%. The number of iterations is related with 8 step of the algorithm, i.e. when the performance of the prediction doesn’t rapidly change the algorithm has stopped. Table 2 presents Performance matrix of TESTDATA. Table 2. Performance matrix of TESTDATA

Actual vs Predicted (Performance Matrix) Predicted (by model) 0 1 Total (units) 73.47 26.53 98 Actual 0 (%) 9.03 90.96 166 Actual 1 (%) Total (%) 84.47% Table 3 presents comparison of importance of ratios in discriminate bankruptcy model before and after changing of weights. The highest impact on results has Total liabilities/Total assets ratio and Net income/Total assets ratio. Changing of weights allows seek the highest accuracy of bankruptcy prediction.

Neuro-Discriminate Model for the Forecasting of Changes

445

Table 3. Changes of variables weights before and after the cycle Name no-ratio weight Net income/Total assets Total liabilities/Total assets S.-t. assets/ S.-t. liabilities no-ratio weight Net income/Total assets Total liabilities/Total assets S.-t. assets/ S.-t. liabilities Performance of bankruptcy prediction (%)

Variables (X0-first period, X1second period) X0_1 X0_2 X0_3 X0_4 X1_1 X1_2 X1_3 X1_4

Weight Weight after before

-4,336 -4,513 5,679 0,004 -4,336 -4,513 5,679 0,004

-4,336 -5,834 5,255 -0,207 -4,336 -4,545 4,617 -0,180

79.15%

84.47%

5 Conclusions •



• • • •

The presented Neuro-discriminate model for forecasting of changes of companies financial standings on the basis of Self-organizing maps also includes multivariate discriminate analysis of bankruptcy and feed-forward supervised neural network; combination of these methods makes original model suitable for forecasting. The other authors which were studied capabilities of SOM in the areas of bankruptcy have estimated only historical and current financial data of the companies and afterwards they have interpreted it for forecasting bankrupt of those companies. We suggest generating SOM as one that could be applied for forecasting of bankruptcy classes for other than trained companies. The presented model works well with real world data, the tests of the model with presented datasets showed accuracy of prediction with more than 84% performance. Methodology of presented model is flexible to adopt every datasets because rules and steps of methodology algorithm are universal. Changing of weights with supervised neural network allows seek the highest accuracy of bankruptcy prediction. Result of prediction is the most important information for creditor showing the trend of a company (positive or negative changes of financial standing of company).

References 1. Atiya.: Bankruptcy prediction for credit risk using neural networks: a survey and new results. IEEE Transactions on Neural Networks, Vol. 12, No. 4, (2001) 929-935 2. Altman, E.: Financial Ratios, Discrimination Analysis and the Prediction of Corporate Bankruptcy. Journal of Finance, (1968)

446

E. Merkevičius, G. Garšva, and R. Simutis

3. Altman. E.: Predicting Financial Distress of Companies: Revisiting the Z-Score and ZETA® Models. (working paper at http://pages.stern.nyu.edu/~ealtman/Zscores.pdf) (2000) 4. Deboeck, G.: Financial Applications of Self-Organizing Maps. American Heuristics Electronic Newsletter, Jan, (1998) 5. Deboeck, G.: Self-Organizing Maps Facilitate Knowledge Discovery In Finance. Financial Engineering News, (1998) 6. EDGAR Online, Inc. http://pro.edgar-online.com (1995-2006) 7. Galindo, J., Tamayo, P.: Credit Risk Assessment Using Statistical and Machine Learning: Basic Methodology and Risk Modeling Applications. Computational Economics. (2000), Vol 15 8. Kiviluoto, K.: Predicting bankruptcies with the self-organizing map. Neurocomputing, Vol. 21, (1998),191–201 9. Kohonen, T.: The Self-Organizing Map. Proceedings of the IEEE, 78:1464-1480 10. Ohlson, J. A.: Financial Ratios and the Probabilistic Prediction of Bankruptcy. Journal of Accounting Research (Spring). (1980), 109-131 11. Martin-del-Prio, K., Serrano-Cinca, Self-Organizing Neural Network: The Financial State of Spanish Companies. In Neural Networks in Finance and Investing. Using Artificial Intelligence to Improve Real-World Performance. R.Trippi, E.Turban, Eds. Probus Publishing, (1993), 341-357 12. Merkevičius, E., Garšva, G., Girdzijauskas, S.: A Hybrid SOM-Altman Model for Bankruptcy Prediction. International Conference on Computational Science (4), Lecture Notes in Computer Science, 3994, 364-371, (2006), ISSN 0302-9743 13. Nørgaard, M.: Neural Network Based System Identification Toolbox Version 2. Technical Report 00-E-891, Department of Automation Technical University of Denmark. (2000). http://kalman.iau.dtu.dk/research/control/nnsysid.html 14. Shumway, T.: Forecasting Bankruptcy More Accurately: A Simple Hazard Model, Journal of Business, Vol. 74, No. 1 (2001), 101-124 15. Vesanto, J., Himberg, J., Alhoniemi, E., Parhankangas, J.: SOM toolbox for Matlab 5, Technical report A57 (2000), Helsinki University of Technology, Finland 16. Zmijewski, M. E.: Methodological Issues Related to the Estimation of Financial Distress Prediction Models. Journal of Accounting Research 24 (Supplement): (1984) 59-82

A New Computing Method for Greeks Using Stochastic Sensitivity Analysis Masato Koda Systems and Information Engineering, University of Tsukuba 1-1-1 Tennou-Dai, Tsukuba, 305-8573 Japan [email protected]

Abstract. In a risk management of derivative securities, Greeks, i.e. sensitivity coefficients, are important measures of market risk to evaluate the impact of misspecification of some stochastic model on the expected payoff function. We investigate a new computing method for Greeks based on Malliavin calculus without resort to a direct differentiation of the complex payoff functions. As a result, a new relation between Γ and Δ is obtained for the Asian option. Keywords: Greeks, Malliavin calculus, Monte Carlo simulation.

1

Introduction

We consider a stochastic integral equation in a well-defined Black-Scholes set-up, t t (1) St = S0 + 0 rSτ dτ + 0 σSτ dWτ , where St is the price of underlying asset with S0 denoting the present (initial) value, r denotes the risk-free interest rate, σ is the volatility, and (Wt )0≤t≤T is a standard Brownian motion. It should be noted that, in European options, we have a closed solution to (1) as ST = S0 exp(μT + σWT ), where μ = r − σ 2 /2 for a fixed expiration or maturity time T . We are interested in studying how to evaluate the sensitivity with respect to model parameters, e.g., present price S0 , volatility σ, etc., of the expected payoff E[e−rT Φ(ST )],

(2)

for an exponentially discounted value of the payoff function Φ(ST ), where E[·] denotes the expectation operator. The sensitivity of more sophisticated payoff functions including path-dependent Asian-type options like E[e−rT Φ(

1T St dt)], T 0

(3)

be treated in a similar manner along the lines that are investigated in this study. In finance, this is the so-called model risk problem. Commonly referred to as Greeks, sensitivities in financial market are typically defined as the partial Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 447–454, 2007. c Springer-Verlag Berlin Heidelberg 2007 

448

M. Koda

derivatives of the expected payoff function with respect to underlying model parameters. In general, finite difference approximations are heavily used to simulate Greeks. However, the approximation soon becomes inefficient particularly when payoff functions are complex and discontinuous. To overcome this difficulty, Broadie and Glasserman [1] proposed a method to put the differential of the payoff function inside the expectation operator required to evaluate the sensitivity. But this idea is applicable only when the density of the random variable involved is explicitly known. Recently, Fournie et al. [2] suggested the use of Malliavin calculus, by means of integration by parts, to shift the differential operator from the expected payoff to the underlying diffusion (e.g., Gaussian) kernel, introducing a weighting function. Another examples that are similar to the present study and explored by the present author (e.g., Ref. [3], [7]) but not covered in this paper are models involving a step function and non-smooth objective functions. In these studies, the stochastic sensitivity analysis technique based on the Novikov’s identity is used instead of Malliavin calculus. In this paper, we present a new constructive approach for the computation of Greeks in financial engineering. The present approach enables the simulation of Greeks without resort to direct differentiation of the complex or discontinuous payoff functions.

2

Malliavin Calculus

Let R be the space of random variables of the form F = f (Wt1 , Wt2 , · · · , Wtn ), where f is smooth and Wt denotes the Brownian motion [6]. For a smooth random variable F ∈ R, we can define its derivative DF = Dt F , where the differential operator D is closable. Since D operates on random variables by differentiating functions in the form of partial derivatives, it shares the familiar chain rule property, Dt (f (F )) = ∇f (F ) · Dt F = f  (F )Dt F , and others. We denote by D∗ the Skorohod integral, defined as the adjoint operator of D. If u belongs to Dom(D∗ ), then D∗ (u) is characterized by the following integration by parts (ibp) formula: T E[F D∗ (u)] = E[ 0 (Dt F )ut dt].

(4)

Note that (4) gives a duality relationship to link operators D and D∗ . The adjoint operator D∗ behaves like a stochastic integral. In fact, if ut is an adapted process, then the Skorohod integral coincides with the Ito integral: i.e., D∗ (u) = T ut dWt . In general, one has 0 D∗ (F u) = F D∗ (u) −

T 0

(Dt F )ut dt.

(5)

A heuristic derivation of (5) is demonstrated here. Let us assume that F and G are any two smooth random variables, and ut a generic process, then by product rule of D one has

A New Computing Method for Greeks Using Stochastic Sensitivity Analysis

449

T T T E[GF D∗ (u)] = E[ 0 Dt (GF )ut dt] = E[ 0 G(Dt F )ut dt] + E[ 0 (Dt G)F ut dt] T = E[G 0 (Dt F )ut dt] + E[GD∗ (F u)], T which implies that E[GD∗ (F u)] = E[G(F D∗ (u) − 0 (Dt F )ut dt)] for any random variables G. Hence, we must have (5). In the present study, we frequently use the following formal relationship to remove the derivative from a (smooth) random function f as follows: E[∇f (X)Y ] = E[f  (X)Y ] = E[f (X)HXY ],

(6)

where X, Y , and HXY are random variables. It is noted that (6) can be deduced from the integration by parts formula (4), and we have an explicit expression for HXY as   Y ∗ . (7) HXY = D T Dt Xdt 0 If higher order derivatives are involved then one has to repeat the procedure (6) iteratively.

3

European Options

In the case of European option whose payoff function is defined by (2), the essence of the present method is that the gradient of the expected (discounted) payoff, ∇E[e−rT Φ(ST )], is evaluated by putting the gradient inside the expectation, i.e., E[e−rT ∇Φ(ST )], which involves computations of ∇Φ(ST ) = Φ (ST ) and ∇ST . Further, applying Malliavin calculus techniques, the gradient is rewritten as E[e−rT Φ(ST )H] for some random variable H. It should be noted, however, that there is no uniqueness in this representation since we can add to H any random variables that are orthogonal to ST . In general, H involves Ito or Skorohod integrals. 3.1

Delta

Now we compute Delta, Δ, the first-order partial differential sensitivity coefficient of the expected outcome of the option, i.e., (2), with respect to the initial asset value S0 : Δ=

∂ ∂ST e−rT E[e−rT Φ(ST )] = e−rT E[Φ (ST ) ]= E[Φ (ST )ST ]. ∂S0 ∂S0 S0

Then, with X = Y = ST in (7), we perform the integration by parts (ibp) to give    e−rT ST e−rT ∗ . (8) E[Φ(ST )HXY ] = E Φ(ST )D Δ= T S0 S0 Dt ST dt 0

450

M. Koda

T T T Since 0 Dt ST dt = σ 0 ST Dt WT dt = σST 0 1{t≤T } = σT ST , we evaluate the stochastic integral   D∗ (1) WT 1 ST ∗ )= = = D∗ ( HXY = D T σT σT σT 0 Dt ST dt with the help of (5) applied to u = 1 (a constant process which is adapted and Ito integral yields D∗ (1) = WT ). Then the final expression for Δ reads Δ= 3.2

e−rT E[Φ(ST )WT ]. σT S0

(9)

Vega

Next Greek Vega, V , is the sensitivity of (2) with respect to the volatility σ: V =

∂ ∂ST E[e−rT Φ(ST )] = e−rT E[Φ (ST ) ] = e−rT E[Φ (ST )ST {WT − σT }]. ∂σ ∂σ

Then, utilizing (6) and (7) again with X = ST and Y = ST (WT − σT ), we apply the ibp to give     −rT ∗ WT T −σT ) V = e−rT E[Φ(ST )HXY ]e−rT E Φ(St )D∗ SÊTT(W = e E Φ(S − 1 . T )D σT D S dt 0

t

T

So, we evaluate the stochastic integral as   WT 1 ∗ 1 ∗ ∗ −1 = D (WT ) − D∗ (1) = D (WT ) − WT . HXY = D σT σT σT With the help of (4) applied to u = 1 (adapted process) and F = WT , we have T T D∗ (WT ) = WT2 − 0 Dt WT dt = WT2 − 0 dt = WT2 − T. If we bring together the partial results obtained above, we derive the final expression

2 WT 1 − WT − V = e−rT E Φ(ST ) . (10) σT σ 3.3

Gamma

The last Greek Gamma, Γ , involves a second-order derivative, Γ =

∂2 e−rT E[e−rT Φ(ST )] = E[Φ (ST )ST2 ]. 2 ∂S0 S02

Utilizing (6) and (7) with X = ST and Y = ST2 , we obtain after a first ibp      e−rT ST2 ST e−rT  ∗  ∗ = E Φ (ST )D E Φ (ST )D Γ = . T S02 S02 σT Dt ST dt 0

A New Computing Method for Greeks Using Stochastic Sensitivity Analysis

451

With the help of (5) applied to u = 1/σT (constant adapted process) and F = ST , we have     ST WT 1 T ST ∗ ∗ D (1) − −1 . D Dt ST dt = ST = σT σT σT 0 σT Then, repeated application of (6) and (7) with X= ST and Y = ST (WT /σT− 1), the second ibp yields  

 T

 e−rT −rT WT ∗ Ê T ST Γ= e S 2 E Φ (ST )ST W − 1 = E Φ(S )D − 1 . 2 T σT σT S D S dt 0

0

0

t

T

With the help of (5) as before, we can evaluate the stochastic integral as      

WT 1 ∗ WT 1 WT2 1 ST ∗ D −1 = D −1 = − WT − . T σT σT σT σT σT σ 0 Dt ST dt If we combine the results obtained above, the final expression becomes    e−rT WT2 1 Γ = − WT − . E Φ(ST ) σT S02 σT σ

(11)

Comparing (11) with (10), we find the following relationship between V and Γ : V . (12) Γ = σT S02 Since we have closed solutions for all the Greeks in European options, we can easily check the correctness of the above results.

4

Asian Options

In the case of Asian option whose payoff functional is defined by (3), the essence of the present approach is again that the gradient of the expected (discounted) T T payoff is rewritten as E[e−rT ∇Φ( T1 0 St dt)] = e−rT E[Φ( T1 0 St dt)H], for some random variable H. Different from the European options, however, we do not have a known closed solution in this case. 4.1

Delta

Delta in this case is given by 1T e−rT 1T 1T ∂ E[e−rT ∇Φ( 0 St dt)] = E[Φ ( 0 St dt) 0 St dt]. ∂S0 T S0 T T T Utilizing (6) and (7) with X = Y = 0 St dt/T , we may apply the ibp to give Δ=

−rT

Δ= e S0 E Φ



1 T

T 0

   −rT     ÊT  S dt T St dt) D∗ Ê T DY Xdt = e S0 E Φ T1 0 St dt D∗ σ Ê0T tSt dt , 0

t

0

t

452

M. Koda

T T T where we have used the relationship 0 0 Dv St dvdt= σ 0 tSt dt. With the help T T of (5) applied to u = 1/σ (constant adapted process) and F = 0 St dt/ 0 tSt dt, we may obtain       1T WT < T2 > 1 e−rT + −1 , (13) E Φ St dt Δ= S0 T 0

σ

T T T where we have used the relationship 0 0 tDv St dvdt = σ 0 t2 St dt, and where we defined T T 2 tSt dt t St dt < T >= 0T and < T 2 >= 0 T . S dt S dt t t 0 0 4.2

Vega

Vega in this case becomes     ∂ 1T 1  T ∂St 1T −rT  V = E e−rT ∇Φ dt S dt = e E Φ S dt t t ∂σ T 0 T 0 T 0 ∂σ   1T 1T = e−rT E Φ St dt St {Wt − tσ}dt . 0 T T 0 T As before, with the help of (6) and (7) applied to X = 0 St dt/T and Y = T St (Wt − tσ)dt/T , we have 0 V = e−rT E Φ



1 T

T 0

  St dt D∗ Ê T

Y 0 Dt Xdt

    ÊT  S W dt T = e−rT E Φ T1 0 St dt D∗ σ0Ê T ttS tdt − 1 , 0

t

which, with the help of (5), yields the following expression    ÊT 2 Ê 1  Ê T Ê T St Wt dtdWτ t St dt 0T St Wt dt −rT 0 0Ê 0 Ê V =e E Φ T St dt + − WT . (14) σ T tS dt ( T tS dt)2 0

4.3

t

0

t

Gamma

Gamma involves a second-order derivative, T 1T e−rT 1T ∂2 −rT E[e ∇Φ( S dt)] = E[Φ ( 0 St dt)( 0 St dt/T )2 ]. t 2 2 0 ∂S0 T S0 T T T Application of (6) with X = 0 St dt/T and Y = ( 0 St dt/T )2 , and utilizing a close variant of (7) (see Ref. [4], [5]), i.e.,   St Y ∗ , HXY = D T 0 Sv Dv Xdv Γ =

we may obtain after a first integration by parts

A New Computing Method for Greeks Using Stochastic Sensitivity Analysis

453

    e−rT St Y  1 T ∗ Γ = E Φ ( 0 Sτ dτ )D T S02 T Sv Dv Xdv 0   −rT 2St 1T e E Φ ( 0 Sτ dτ )D∗ , (15) = 2 S0 T σT T t T T T where the relation 0 0 Sv Dv St dvdt = σ 0 0 St Sv dvdt = σ2 ( 0 St dt)2 is used. Further, we obtain −rT

Γ= 2e E Φ ( T1 σT S 2 0

T 0

Sτ dτ )

T 0

  T T −rT St dWt = σ2e2 T S 2 E Φ ( T1 0 St dt)(ST − S0 − r 0 St dt) , 0

which involves (1). T Then, repeated application of (6) and (7) with X = 0 St dt/T and Y = T ST − S0 − r 0 St dt, the second integration by parts yields Γ =

2e−rT 2 σ 2 S0

E Φ( T1

=

2e−rT 2 σ 2 S0

E Φ( T1

T 0

T 0

St dt)D∗ St dt)D∗

 

Ê

ST −S0 −r 0T St dt Ê σ 0T tSt dt ST −S0 Ê σ 0T tSt dt







2re−rT 2 σ 2 S0

E Φ( T1

T 0

St dt)D∗



T

 . dt

ÊT St dt Ê0T

σ 0 tSt

With the help of (5) applied to u = 1/σ and F = (ST − S0 )/ 0 tSt dt, the present approach yields a brand new estimate which gives an explicit relationship between Γ and Δ as follows:    S 2r − S 1T 2e−rT T 0 − 2 Δ Γ = 2 2 E Φ( 0 St dt)D∗ T σ S0 T σ S0 σ 0 tSt dt     2e−rT WT < T2 > 1T 1 = 2 2 E Φ( 0 St dt)  T + (ST − S0 ) − T ST σ S0 T σ

tSt dt 0

2r − 2 Δ. σ S0

5

(16)

Monte Carlo Simulations of Asian Option

Here, we present the simulation results with parameters r = 0.1, σ = 0.25, T = 0.2 (in years), and S0 = K = 100 (in arbitrary cash units) where K denotes the strike price. We have divided the entire interval of integration into 252 pieces, representing the trading days in a year. In Fig. 1, we compare the convergence behavior of Δ with the result obtained by Broadie and Glasserman [1]. The result indicates a fairy good convergence to the steady-state value that is attained at 10, 000th iteration stage in [1]. In Fig. 2, we compare the simulation result of V with the one that is obtained at 10, 000th iteration stage in [1]. This indicates that some noticeable bias may remain in the present Monte Carlo simulation, and further study may be necessary to analyze and reduce the bias involved. Although we cannot compare the proposed method with others, the present simulations may provide most detailed and extensive results currently available for Greeks of Asian option.

454

M. Koda

Fig. 1. Estimated Delta

6

Fig. 2. Estimated Vega

Conclusions

We have presented a stochastic sensitivity analysis method, in particular, a constructive approach for computing Greeks in finance using Malliavin calculus. As a result, a new relation between Γ and Δ is obtained for the Asian option. The present approach may be useful when the random variables are smooth in the sense of stochastic derivatives. It is, however, necessary to further investigate and improve Monte Carlo procedures to reduce the bias involved in the simulation of Vega in Asian-type options and other sophisticated options.

References 1. Broadie, M., Glasserman, P.: Estimating security price derivatives using simulation. Management Science 42 (1996) 269-285 2. Fournie, E., Lasry, J.M., Lebuchoux, L., Lions, P.L.: An application of Malliavin calculus to Monte Carlo methods in Finance II. Finance and Stochastics 5 (2001) 201-236 3. Koda, M., Okano, H.: A new stochastic learning algorithm for neural networks. Journal of the Operations Research Society of Japan 43 (2000) 469-485 4. Koda, M., Kohatsu-Higa, A., Montero, M.: An Application of Stochastic Sensitivity Analysis to Financial Engineering. Discussion Paper Series, No. 980, Institute of Policy and Planning Sciences, University of Tsukuba (2002) 5. Montero, M., Kohatsu-Higa, A.: Malliavin calculus applied to finance. Physica A 320 (2003) 548-570 6. Nualert, D.: The Malliavin Calculus and Related Topics. Springer, New York (1995) 7. Okano, H., Koda, M.: An optimization algorithm based on stochastic sensitivity analysis for noisy objective landscapes. Reliability Engineering and System Safety, 79 (2003) 245-252

Application of Neural Networks for Foreign Exchange Rates Forecasting with Noise Reduction Wei Huang1,3, Kin Keung Lai2,3, and Shouyang Wang4 1 School

of Management, Huazhong University of Science and Technology WuHan, 430074, China [email protected] 2 College of Business Administration, Hunan University Changsha 410082, China 3 Department of Management Sciences, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong {weihuang,mskklai}@cityu.edu.hk 4 Institute of Systems Science, Academy of Mathematics and Systems Sciences Chinese Academy of Sciences, Beijing, 100080, China [email protected]

Abstract. Predictive models are generally fitted directly from the original noisy data. It is well known that noise can seriously limit the prediction performance on time series. In this study, we apply the nonlinear noise reduction methods to the problem of foreign exchange rates forecasting with neural networks (NNs). The experiment results show that the nonlinear noise reduction methods can improve the prediction performance of NNs. Based on the modified DieboldMariano test, the improvement is not statistically significant in most cases. We may need more effective nonlinear noise reduction methods to improve prediction performance further. On the other hand, it indicates that NNs are particularly well appropriate to find underlying relationship in the environment characterized by complex, noisy, irrelevant or partial information. We also find that the nonlinear noise reduction methods work more effectively when the foreign exchange rates are more volatile.

1 Introduction Foreign exchange rates exhibit high volatility, complexity and noise that result from the elusive market mechanism generating daily observations [1]. It is certainly very challenging to predict foreign exchange rates. Neural networks (NNs) have been widely used as a promising alternative approach for a forecasting task because of several distinguishing features [2]. Several design factors significantly affect the prediction performance of neural networks [3]. Generally, NNs learn and generalize directly from noisy data with the faith on ability to extract the underlying deterministic dynamics from the noisy data. However, it is well known that the model's generalization performance will be poor unless we prevent the model from over-learning. Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 455–461, 2007. © Springer-Verlag Berlin Heidelberg 2007


Given that most financial time series contain dynamic noise, it is necessary to reduce noise in the data with nonlinear methods before fitting the prediction models. However, not much work has been done [4]. In this study, we employ two nonlinear noise reduction methods to alleviate these problems. The remainder of this paper is organized as follows. In Section 2, we give a brief introduction to the two nonlinear noise reduction methods. Section 3 presents the experiment design. Section 4 discusses the empirical experiment result. Finally, Section 5 offers some concluding remarks.

2 Nonlinear Noise Reduction

Conventional linear filtering in the time or Fourier domain can be very powerful as long as nonlinear structures in the data are unimportant. However, nonlinearity cannot be fully characterized by second-order statistics such as the power spectrum. Nonlinear noise reduction does not rely on frequency information to define the distinction between signal and noise. Instead, structure in the reconstructed phase space is exploited. Here we concentrate on two nonlinear noise reduction methods that represent the geometric structure in phase space by local approximation: the former does so to constant order, while the latter uses local linear subspaces plus curvature corrections. Interested readers may refer to the review articles [5-7].

2.1 Simple Nonlinear Noise Reduction (SNL)

SNL replaces the central coordinate of each embedding vector by the local average of this coordinate:

z_n = \frac{1}{|U_n^\delta|} \sum_{x_k \in U_n^\delta} x_k ,    (1)

where U_n^\delta is the neighborhood formed in phase space containing all points x_k such that \|x_k - x_n\| < \delta. This noise reduction method amounts to a locally constant approximation of the dynamics and is based on the assumption that the dynamics is continuous.
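A minimal sketch of this locally constant filter is given below, assuming the series is available as a one-dimensional NumPy array; the embedding helper, the sup-norm neighbourhood, and the parameter names (m, delta) are our own illustrative choices rather than anything specified in the paper.

import numpy as np

def delay_embed(x, m):
    # Build delay vectors of dimension m from the scalar series x.
    return np.array([x[i:i + m] for i in range(len(x) - m + 1)])

def snl_filter(x, m=7, delta=0.1):
    # Simple nonlinear noise reduction (Eq. (1)): replace the central
    # coordinate of each delay vector by the average of that coordinate
    # over all delay vectors within distance delta.
    x = np.asarray(x, dtype=float)
    emb = delay_embed(x, m)
    mid = m // 2
    cleaned = x.copy()
    for n, v in enumerate(emb):
        dist = np.max(np.abs(emb - v), axis=1)   # sup-norm neighbourhood U_n^delta
        neighbours = emb[dist < delta]           # always contains v itself
        cleaned[n + mid] = neighbours[:, mid].mean()
    return cleaned

In practice the embedding dimension m and the radius delta have to be tuned to the scale of the data.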

2.2 Locally Projective Nonlinear Noise Reduction (LP)

LP rests on the hypothesis that the measured data are composed of the output of a low-dimensional dynamical system and of random or high-dimensional noise. This means that in an arbitrarily high-dimensional embedding space the deterministic part of the data would lie on a low-dimensional manifold, while the effect of the noise is to spread the data off this manifold. If we suppose that the amplitude of the noise is


sufficiently small, we can expect to find the data distributed closely around this manifold. The idea of the projective nonlinear noise reduction scheme is to identify the manifold and to project the data onto it.

Suppose the dynamical system forms a q-dimensional manifold \mathcal{M} containing the trajectory. According to the embedding theorems, there exists a one-to-one image of the attractor in the embedding space if the embedding dimension is sufficiently high. Thus, if the measured time series were not corrupted with noise, all the embedding vectors S_n would lie inside another manifold \tilde{\mathcal{M}} in the embedding space. Due to the noise this condition is no longer fulfilled. The idea of the locally projective noise reduction scheme is that for each S_n there exists a correction \Theta_n, with \Theta_n small, such that S_n - \Theta_n \in \tilde{\mathcal{M}} and \Theta_n is orthogonal to \tilde{\mathcal{M}}. Of course a projection to the manifold can only be a reasonable concept if the vectors are embedded in spaces of higher dimension than the manifold \tilde{\mathcal{M}}; thus we have to over-embed in m-dimensional spaces with m > q.

The notion of orthogonality depends on the metric used. Intuitively one would think of using the Euclidean metric, but this is not necessarily the best choice. The reason is that we are working with delay vectors which contain temporal information. Thus, even if the middle parts of two delay vectors are close, the late parts could be far away from each other due to the influence of the positive Lyapunov exponents, while the first parts could diverge due to the negative ones. Hence it is usually desirable to correct only the center part of delay vectors and leave the outer parts mostly unchanged, since their divergence is not only a consequence of the noise but also of the dynamics itself. It turns out that for most applications it is sufficient to fix just the first and the last component of the delay vectors and correct the rest. This can be expressed in terms of a metric tensor P, which we define to be

P_{ij} = \begin{cases} 1 & 1 < i = j < m \\ 0 & \text{otherwise} \end{cases}    (2)

where m is the dimension of the over-embedded delay vectors. Thus we have to solve the minimization problem

\min \sum_i \Theta_i P^{-1} \Theta_i    (3)

subject to

a_n^i (S_n - \Theta_n) + b_n^i = 0, \quad i = q+1, \dots, m,
a_n^i P a_n^j = \delta_{ij},

where the a_n^i are the normal vectors of \tilde{\mathcal{M}} at the point S_n - \Theta_n.
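The constrained problem (2)-(3) is usually attacked through a local principal-component construction. The sketch below is a simplified version of that idea under our own assumptions: it uses the plain Euclidean metric instead of the tensor P, an SVD of the local neighbourhood instead of the Lagrangian formulation, and corrects only the central coordinate as the text recommends; the function and parameter names are ours.

import numpy as np

def lp_filter(x, m=10, q=2, k=30):
    # Simplified locally projective noise reduction: project each delay
    # vector's deviation from its local mean onto the q dominant local
    # directions, discarding the remaining m - q "noise" directions.
    x = np.asarray(x, dtype=float)
    emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    mid = m // 2
    cleaned = x.copy()
    for n, v in enumerate(emb):
        idx = np.argsort(np.linalg.norm(emb - v, axis=1))[:k]  # k nearest neighbours
        local = emb[idx]
        centre = local.mean(axis=0)
        _, _, vt = np.linalg.svd(local - centre, full_matrices=False)
        basis = vt[:q]                    # local approximation of the manifold
        projected = centre + basis.T @ (basis @ (v - centre))
        cleaned[n + mid] += projected[mid] - v[mid]   # correct only the centre
    return cleaned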


3 Experiments Design

3.1 Neural Network Models

In this study, we employ one of the most widely used neural network models, the three-layer back-propagation neural network (BPNN), for foreign exchange rate forecasting. The activation function used for all hidden nodes is the logistic function, while the linear function is employed in the output node. The number of input nodes is a very important factor in neural network analysis of a time series, since it corresponds to the number of past lagged observations related to future values. To avoid introducing a bias in the results, we choose the number of input nodes as 3, 5, 7 and 9, respectively: neural networks with one input node are too simple to capture the complex relationships between input and output, and it is rarely seen in the literature that the number of input nodes is more than nine. Generally speaking, too many nodes in the hidden layer produce a network that memorizes the input data and lacks the ability to generalize. Another consideration is that as the number of hidden nodes in a network is increased, the number of variables and terms is also increased. If the network has more degrees of freedom (the number of connection weights), more training samples are needed to constrain the neural network. It has been shown that the in-sample fit and the out-of-sample forecasting ability of NNs are not very sensitive to the number of hidden nodes [8]. Parsimony is a principle for designing NNs; therefore, we use four hidden nodes in this study.

3.2 Random Walk Model

The weak form of efficient market theory states that prices always fully reflect the available information; that is, a price is determined by the previous value in the time series because all the relevant information is summarized in that previous value. An extension of this theory is the random walk (RW) model. The RW model assumes not only that all historic information is summarized in the current value, but also that increments, positive or negative, are uncorrelated (random) and balanced, that is, with an expected value equal to zero. In other words, in the long run there are as many positive as negative fluctuations, making long-term predictions other than the trend impossible. The random walk model uses the actual value of the current period to predict the value of the next period as follows:

\hat{y}_{t+1} = y_t    (4)

where \hat{y}_{t+1} is the predicted value of the next period and y_t is the actual value of the current period.

3.3 Performance Measure

We employ the root mean squared error (RMSE) to evaluate the prediction performance of the neural networks as follows:


RMSE = \sqrt{\frac{\sum_t (y_t - \hat{y}_t)^2}{T}}    (5)

where y_t is the actual value, \hat{y}_t is the predicted value, and T is the number of predictions.

3.4 Data Preparation

From the Pacific Exchange Rate Service provided by Professor Werner Antweiler, University of British Columbia, Canada, we obtain 3291 daily observations of the U.S. dollar against the British Pound (GBP) and the Japanese Yen (JPY), covering the period from January 1990 to December 2002. We take the natural logarithmic transformation to stabilize the time series. First, we produce the testing sets for each neural network model by selecting the 60 most recent patterns from each dataset. Then, we produce the appropriate training sets for each neural network model from the corresponding remaining data by using the method in [9].
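A rough sketch of the experimental setup in Sections 3.1-3.4 follows, assuming scikit-learn's MLPRegressor as a stand-in for the three-layer BPNN (the paper does not state which implementation was used) and a simple chronological split in place of the training-set selection method of [9]; all function and parameter names are illustrative.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_inputs):
    # Design matrix of n_inputs lagged log-rates and next-day targets.
    X = np.array([series[i:i + n_inputs] for i in range(len(series) - n_inputs)])
    return X, series[n_inputs:]

def evaluate(rates, n_inputs=5, n_test=60):
    log_rates = np.log(np.asarray(rates, dtype=float))
    X, y = make_lagged(log_rates, n_inputs)
    X_tr, y_tr, X_te, y_te = X[:-n_test], y[:-n_test], X[-n_test:], y[-n_test:]
    # three-layer BPNN: logistic hidden units, linear output, 4 hidden nodes
    nn = MLPRegressor(hidden_layer_sizes=(4,), activation='logistic',
                      max_iter=5000, random_state=0).fit(X_tr, y_tr)
    rmse_nn = np.sqrt(np.mean((y_te - nn.predict(X_te)) ** 2))
    rmse_rw = np.sqrt(np.mean((y_te - X_te[:, -1]) ** 2))   # random walk benchmark
    return rmse_nn, rmse_rw

The same routine can be run on the raw series and on SNL- or LP-filtered versions of it to reproduce the kind of comparison reported below.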

4 Experiments Results

Table 1 shows the prediction performance of the random walk model, which is used as the benchmark for foreign exchange rate prediction. In Tables 2 and 3, the RMSE of the noisy data is the largest and the RMSE of LP is the smallest in each row. This indicates that the noise reduction methods actually improve the prediction performance of NNs and, further, that LP is more effective than SNL in reducing the noise of exchange rates. We also apply the modified Diebold-Mariano test [10] to examine whether the two noise reduction methods improve NN financial forecasting significantly. From the test statistic values shown in Tables 4 and 5, only in the prediction of JPY do the NNs with 9 input nodes using data filtered by LP outperform those using noisy data significantly at the 20% level (the rejection of equality of prediction mean squared errors is based on the critical value of Student's t-distribution with 69 degrees of freedom, namely 1.294 at the 20% level). Perhaps there is still noise in the exchange rate time series after applying the noise reduction methods; we look forward to more effective noise reduction methods in the future. On the other hand, it also implies that NNs are useful for extracting the underlying deterministic dynamics from noisy data at the present stage. In addition, the test statistic value of SNL for GBP is less than that of SNL for JPY in the same row, and the same pattern is observed between LP for GBP and LP for JPY. JPY is more volatile than GBP, that is, there is more noise in JPY than in GBP to be filtered, so the improvement for JPY after noise reduction is more significant than for GBP.

Table 1. The prediction performance of the random walk models

RMSE of GBP    RMSE of JPY
0.005471       0.007508


Table 2. The prediction performance of NNs using data with noise, filtered by SNL and filtered by LP, respectively (GBP)

#Input nodes    Noisy data    Data filtered by SNL    Data filtered by LP
3               0.005471      0.005465                0.005461
5               0.004491      0.004473                0.004457
7               0.004496      0.004494                0.004467
9               0.0054671     0.005464                0.00541

Table 3. The prediction performance of NNs using data with noise, filtered by SNL and filtered by LP, respectively (JPY)

#Input nodes    Noisy data    Data filtered by SNL    Data filtered by LP
3               0.007438      0.007018                0.006719
5               0.006348      0.00616                 0.005989
7               0.006358      0.006279                0.006067
9               0.007293      0.006811                0.006399

Table 4. The test statistic value of equality of prediction errors between noisy and filtered data (GBP)

#Input nodes    Data filtered by SNL    Data filtered by LP
3               0.012                   0.021
5               0.043                   0.093
7               0.005                   0.087
9               0.007                   0.129

Table 5. The test statistic value of equality of prediction errors between noisy and filtered data (JPY)

#Input nodes    Data filtered by SNL    Data filtered by LP
3               0.904                   1.122
5               0.391                   0.733
7               0.162                   0.509
9               0.879                   1.605
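The statistics in Tables 4 and 5 come from the modified Diebold-Mariano test of Harvey, Leybourne and Newbold [10]. A hedged sketch of that statistic for squared-error loss is shown below; the small-sample correction and the T - 1 degrees of freedom follow the standard formulation and may differ in detail from the 69-degree-of-freedom setup quoted in the text.

import numpy as np
from scipy import stats

def modified_dm(e1, e2, h=1):
    # Modified Diebold-Mariano statistic for equal predictive accuracy
    # under squared-error loss; e1, e2 are forecast errors over T points.
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2
    T = len(d)
    d_bar = d.mean()
    # long-run variance of d_bar from autocovariances up to lag h - 1
    gamma = [np.sum((d[k:] - d_bar) * (d[:T - k] - d_bar)) / T for k in range(h)]
    var_dbar = (gamma[0] + 2 * sum(gamma[1:])) / T
    dm = d_bar / np.sqrt(var_dbar)
    stat = np.sqrt((T + 1 - 2 * h + h * (h - 1) / T) / T) * dm
    p_value = 2 * stats.t.sf(abs(stat), df=T - 1)
    return stat, p_value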

5 Conclusions

In this paper, we apply two nonlinear noise reduction methods, namely SNL and LP, to foreign exchange rate forecasting with neural networks. The experiment results show that SNL and LP can improve the prediction performance of NNs, although the improvement is not statistically significant at the 20% level based on the modified Diebold-Mariano test. LP performs better than SNL in reducing the noise of foreign exchange rates, and the noise reduction methods work more effectively on JPY than


on GBP. In future work, we will employ more effective nonlinear noise reduction methods to improve prediction performance further. On the other hand, the results indicate that NNs are particularly well suited to finding underlying relationships in an environment characterized by complex, noisy, irrelevant or partial information. In most cases, NNs outperform the random walk model.

Acknowledgements

The work described in this paper was supported by a Strategic Research Grant of City University of Hong Kong (SRG No. 7001806) and the Key Research Institute of Humanities and Social Sciences in Hubei Province - Research Center of Modern Information Management.

References

1. Theodossiou, P.: The stochastic properties of major Canadian exchange rates. The Financial Review 29 (1994) 193-221
2. Zhang, G., Patuwo, B.E., Hu, M.Y.: Forecasting with artificial neural networks: the state of the art. International Journal of Forecasting 14 (1998) 35-62
3. Huang, W., Lai, K.K., Nakamori, Y., Wang, S.Y.: Forecasting foreign exchange rates with artificial neural networks: a review. International Journal of Information Technology & Decision Making 3 (2004) 145-165
4. Soofi, A., Cao, L.: Nonlinear forecasting of noisy financial data. In: Soofi, A., Cao, L. (eds.): Modeling and Forecasting Financial Data: Techniques of Nonlinear Dynamics. Kluwer Academic Publishers, Boston (2002) 455-465
5. Davies, M.E.: Noise reduction schemes for chaotic time series. Physica D 79 (1994) 174-192
6. Kostelich, E.J., Schreiber, T.: Noise reduction in chaotic time series data: A survey of common methods. Physical Review E 48 (1993) 1752-1800
7. Grassberger, P., Hegger, R., Kantz, H., Schaffrath, C., Schreiber, T.: On noise reduction methods for chaotic data. CHAOS 3 (1993) 127
8. Zhang, G., Hu, M.Y.: Neural network forecasting of the British Pound/US Dollar exchange rate. Journal of Management Science 26 (1998) 495-506
9. Huang, W., Nakamori, Y., Wang, S.Y., Zhang, H.: Select the size of training set for financial forecasting with neural networks. Lecture Notes in Computer Science, Vol. 3497. Springer-Verlag, Berlin Heidelberg (2005) 879-884
10. Harvey, D., Leybourne, S., Newbold, P.: Testing the equality of prediction mean squared errors. International Journal of Forecasting 13 (1997) 281-291

An Experiment with Fuzzy Sets in Data Mining

David L. Olson 1, Helen Moshkovich 2, and Alexander Mechitov 2

1 University of Nebraska, Department of Management, Lincoln, NE 68588-0491, USA
[email protected]
2 Montevallo University, Comer Hall, Montevallo, AL 35115, USA
[email protected], [email protected]

Abstract. Fuzzy modeling provides a very useful tool for dealing with human vagueness in describing scales of value. This study examines the relative error in decision tree models applied to a real set of credit card data used in the literature, comparing crisp models with fuzzy decision trees as applied by See5, and as obtained by categorization of the data. The impact of ordinal data is also tested. Modifying continuous data was expected to degrade model accuracy, but to be more robust with respect to human understanding. The degree of accuracy lost through See5 fuzzification was minimal (the fuzzy models were in fact more accurate in terms of total error), although bad error was worse. Categorization of the data yielded greater inaccuracy. However, both treatments are still useful if they better reflect human understanding. An additional conclusion is that when categorizing data, care should be taken in setting categorical limits.

Keywords: Decision tree rules, fuzzy data, ordinal data.

1 Introduction

Classification tasks in business applications may be viewed as tasks with classes reflecting levels of the same property. Evaluating the creditworthiness of clients is quite often measured on an ordinal level, e.g., {excellent}, {good}, {acceptable}, or {poor} (Ben David et al., 1989). Applicants for a job are divided into accepted and rejected, but sometimes there may also be a pool of applicants left for further analysis, as they may be accepted in some circumstances [2], [11]. Different cars may be divided into groups {very good}, {good}, {acceptable}, {unacceptable}. This type of task is called "ordinal classification" [5]. The peculiarity of ordinal classification is that data items with {better} qualities (characteristics) logically should be assigned to {better} classes: the better the article in its characteristics, the closer it is to the class {accepted}. It was shown in [6] that taking into account possible ordinal dependence between attribute values and final classes may lead to a smaller number of rules with the same accuracy and enable the system to extend the obtained rules to instances not presented in the training data set. There are many data mining tools available to cluster data, to help analysts find patterns, and to find association rules. The majority of data mining approaches to classification tasks work with numerical and categorical information. Not many data mining techniques take into account ordinal data features.


Real-world applications are full of vagueness and uncertainty. Several theories on managing uncertainty and imprecision have been advanced, including fuzzy set theory [13], probability theory [8], rough set theory [7] and set pair theory [14], [15]. Fuzzy set theory is used more than the others because of its simplicity and similarity to human reasoning. Although there is a wide variety of different approaches within this field, many view the advantage of the fuzzy approach in data mining as an "interface between a numerical scale and a symbolic scale which is usually composed of linguistic terms" [4]. Fuzzy association rules described in linguistic terms help increase the flexibility for supporting users in making decisions. Fuzzy set theory is being used more and more frequently in intelligent systems. A fuzzy set A in a universe U is defined as A = \{(x, \mu_A(x)) \mid x \in U, \mu_A(x) \in [0,1]\}, where \mu_A(x) is a membership function indicating the degree of membership of x to A. The greater the value of \mu_A(x), the more x belongs to A. Fuzzy sets can also be thought of as an extension of traditional crisp sets and categorical/ordinal scales, in which each element is either in the set or not in the set (a membership function of either 1 or 0). Fuzzy set theory in its many manifestations (interval-valued fuzzy sets, vague sets, grey-related analysis, rough set theory, etc.) is highly appropriate for dealing with the masses of data available. This paper will review some of the general developments of fuzzy sets in data mining, with the intent of seeing some of the applications in which they have played a role in advancing the use of data mining in many fields. It will then review the use of fuzzy sets in two data mining software products, and demonstrate the use of data mining in an ordinal classification task. The results will be analyzed through comparison with the ordinal classification model. Possible adjustments of the model to take into account fuzzy thresholds in ordinal scales will be discussed.
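As a small illustration of the membership function \mu_A(x) just defined, the sketch below evaluates triangular fuzzy sets of the kind used later for linguistic terms such as "Young" or "Middle"; the breakpoints are invented for illustration and only roughly reproduce the 0.85/0.15 degrees quoted later in the text, they are not taken from any of the cited studies.

def triangular(x, a, b, c):
    # Membership degree of x in a triangular fuzzy set that peaks at b
    # and falls to zero at a and c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# degree to which age 23 belongs to hypothetical "Young" and "Middle" sets
young = triangular(23, 0, 20, 35)    # roughly 0.8
middle = triangular(23, 20, 40, 60)  # roughly 0.15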

2 Fuzzy Set Experiments in See5

See5, a decision tree software package, allows users to soften thresholds by selecting a fuzzy option. This option inserts a buffer at boundaries (which is how PolyAnalyst works as well). The buffer is determined by the software based on analysis of the sensitivity of classification to small changes in the threshold. The treatment from there is crisp, as opposed to fuzzy. Thus, in decision trees, fuzzy implementations appear to be crisp models with adjusted set boundaries. See5 was used on a real set of credit card data [10]. This dataset had 6,000 observations over 64 variables plus an outcome variable indicating bankruptcy or not (the variables are defined in [10]). Of the 64 independent variables, 9 were binary and 3 categorical. The problem can be considered an ordinal classification task, as the two final classes are named "GOOD" and "BAD" with respect to financial success. This means that the majority of the numerical and categorical attributes (including binary ones) may easily be characterized by more preferable values with respect to "GOOD" financial success. The dataset was balanced to a degree, so that it contained 960 bankrupt outcomes ("BAD") and 5040 not bankrupt ("GOOD"). Winnowing was used in See5, which


reduced the number of variables used in models to about 20. Using 50 percent of the data for training, See5 selected 3000 observations at random as the training set, which was then tested on the remaining 3000 observations in the test set. Minimum support on See5 was varied over the settings of 10, 20, and 30 cases. Pruning confidence factors were also varied, from 10% (greater pruning), 20%, 30%, and 40% (less pruning). Data was locked within nominal data runs, so that each treatment of pruning and minimum case settings was applied to the same data within each repetition. Five repetitions were conducted (thus there were 12 combinations repeated five times, or 60 runs). Each run was replicated for original crisp data, original data using fuzzy settings, ordinal crisp data, ordinal data using fuzzy settings, and categorical data (See5 would have no difference between crisp and fuzzy settings). Rules obtained were identical across crisp and fuzzy models, except fuzzy models had adjusted rule limits. For instance, in the first run, the following rules were obtained:

CRISP MODEL:
RULE 1: IF RevtoPayNov ≤ 11.441 then GOOD
RULE 2: IF RevtoPayNov > 11.441 AND IF CoverBal3 = 1 then GOOD
RULE 3: IF RevtoPayNov > 11.441 AND IF CoverBal3 = 0 AND IF OpentoBuyDec > 5.35129 then GOOD
RULE 4: IF RevtoPayNov > 11.441 AND IF CoverBal3 = 0 AND IF OpentoBuyDec ≤ 5.35129 AND IF NumPurchDec ≤ 2.30259 then BAD
RULE 5: ELSE GOOD

The fuzzy model for this data set:

FUZZY MODEL:
RULE 1: IF RevtoPayNov ≤ 11.50565 then GOOD
RULE 2: IF RevtoPayNov > 11.50565 AND IF CoverBal3 = 1 then GOOD
RULE 3: IF RevtoPayNov > 11.50565 AND IF CoverBal3 = 0 AND IF OpentoBuyDec > 5.351905 then GOOD
RULE 4: IF RevtoPayNov > 11.50565 AND IF CoverBal3 = 0 AND IF OpentoBuyDec ≤ 5.351905 AND IF NumPurchDec ≤ 2.64916 then BAD
RULE 5: ELSE GOOD

An Experiment with Fuzzy Sets in Data Mining

465

based on all 60 runs. The average number of crisp rules was 9.4 (fuzzy rules were the same, with different limits). Crisp total error averaged 488.7, while fuzzy total error averaged 487.2. The number of rules responded to changes in pruning rates and minimum case settings as expected (the tie between 20 percent and 30 percent pruning rates can be attributed to data sampling chance). There were no clear patterns in error rates by treatment. Fuzzy models were noticeably different from crisp models in that they had higher error rates for bad cases, with corresponding improvement in error in the good cases. The overall error was tested by t-test, and the only significant differences found were that the fuzzy models had significantly greater bad error than the crisp models, and significantly less cheap error. The fuzzy models had slightly less overall average error, but given the context of credit cards, bad error is much more important. For data fit, here the models were not significantly different. For application context, the crisp models would clearly be preferred. The generalizable conclusion is not that crisp models are better, only that in this case the fuzzy models were worse, and in general one cannot count on the same results across crisp and fuzzified models. In this case introducing fuzzy thresholds in the rules did not lead to any significant results. The usage of small fuzzy intervals instead of crispy thresholds did not significantly improve the accuracy of the model and did not provide better interpretation of the introduced “interval rules.” On the other hand, crisp data was not significantly better than the fuzzy data. The same tests were conducted with presenting relevant binary and categorical variables in an ordinal form. See5 allows stating that the categorical scale is “[ordered]” with the presented order of attribute values corresponding to the order of final classes. The order is not derived from the data but is introduced by the user as a pre-processing step in rules/tree formation. See5 would not allow locking across data sets, and required different setup for ordinal specification, so we could not control for data set sampling across the tests. Some categorical and/or binary variables such as “Months late” were clearly ordinal and were marked as ordinal for this experiment. Categorical variables with no clear ordinal qualities such as “State,” were left nominal. Crisp rules averaged 7.0, with total error of 487. Fuzzy total error was 482. The number of rules clearly dropped. Expected response of number of rules to pruning and minimum case settings behaved as expected, with the one anomaly at 20 percent pruning, again explainable by the small sample size. Total error rates within ordinal model were similar to the nominal case in the first set of runs, with fuzzy model total error rates showing up as slightly significant (0.086 error probability) in the ordinal models. Comparing nominal and ordinal models, the number of rules was significantly lower for ordinal models (0.010 error probability.) There were no significances in errors across the two sets except for total error (ordinal models had slightly significantly lower total errors, with 0.087 error probability.) This supports our previous finding that using ordinal scales where appropriate lead to a set of more interesting rules without loss in accuracy [6]. The data was categorized into 3 categories for each continuous variable. This in itself is another form of fuzzification. 
Twenty five variables were selected based upon the typical winnowing results of the original data. The same tests were conducted,

466

D.L. Olson, H. Moshkovich, and A. Mechitov

although fuzzification was not used since See5 would have no difference in applying its fuzzification routine (we did run as a check, but results were always identical). Average number of rules was just over 6, with average total error 495.7. Results were much worse than the results obtained in prior runs with continuous and ordinal data. That is clearly because in the third set of runs, data was categorized manually, while in the prior two runs See5 software set the categorical limits. The second set of runs involved data converted to fuzzy form by the See5 software. These are two different ways to obtain fuzzy categories. Clearly the software can select cutoff limits that will outperform ad hoc manual cutoffs.

3 Fuzzy Sets and Ordinal Classification Task Previous experiments showed very modest improvements in the rule set derived from introducing of fuzzy intervals instead of crisp thresholds for continuous scales using SEE5. Interpretation of the modified rules was not “more friendly” or “more logical.” Using stable data intervals was in general slightly more robust than using crisp thresholds. Considering ordinal properties of some categorical/binary attributes led to a better rule set although this did not change the fuzzy intervals for the continuous scales. This supports our previous findings [6]. One of the more useful aspects of fuzzy logic may be the orientation on the partition of continuous scales into a pre-set number of “linguistic summaries”[12]. In [1] this approach is used to form fuzzy rules in a classification task. The main idea of the method is to use a set of pre-defined linguistic terms for attributes with continuous scales (e.g., “Young”, “Middle”, “Old” for an attribute “Age” measured continuously). In this approach. the traditional triangular fuzzy number is calculated for each instance of age in the training data set, e.g. age 23 is presented in ‘Young” with a 0.85 membership function and in “Middle” with a 0.15 membership function (0 in “Old”). Thus the rewritten data set is used to mine interesting IF-THEN rules using linguistic terms. One of the advantages of the proposed approach stressed by the authors is the ability of the mining method to produce rules useful for the user. In [3], the method was used to mine a database for direct marketing campaign of a charitable organization. In this case the domain expert defined appropriate uniform linguistic terms for quantitative attributes. For example, an attribute reflecting the average amount of donation (AVGEVER) was fuzzified into “Very low” (0 to $300), “Low” ($100 to $500), “Medium” ($300 to $700), “High” ($500 to $900) and “Very High” (over $700). The analogous scale for frequency of donations (FREQEVER) was presented as follows: “Very low” (0 to 3), “Low” (1 to 5), “Medium” (3 to 7), “High” (5 to 9) and “Very High” (over 7). Triangular fuzzy numbers were derived from these settings for rule mining. The attribute to be predicted was called “Response to the direct mailing” and included two possible values “Yes” and “No.” The database included 93 attributes, 44 having continuous scales. Although the application of the method produced a huge number of rules (31,865) with relatively low classification accuracy (about 64%), the authors argued that for a task of such complexity the selection of several useful rules by the user of the results was enough to prove the usefulness of the process. The presented rules found useful by the user were presented as follows:

An Experiment with Fuzzy Sets in Data Mining

467

Rule 1: IF a donor was enrolled in any donor activity in the past (ENROLL=YES), THEN he/she will have RESPONSE=YES Rule 2: IF a donor was enrolled in any donor activity in the past (ENROLL=YES) AND did not attend it (ATTENDED=NO), THEN he/she will have RESPONSE=YES Rule 3: IF FREQEVER = MEDIUM, THEN RESPONSE=YES Rule 4: IF FREQEVER = HIGH, THEN RESPONSE=YES Rule 5: IF FREQEVER = VERY HIGH, THEN RESPONSE=YES We infer two conclusions based on these results. First, if obvious ordinal dependences between final classes of RESPONSE (YES/NO) and such attributes as ENROLL, ATTENDED, and FREQEVER were taken into account the five rules could be collapsed into two without any loss of accuracy and with higher levels for measures of support and confidence: rule 1 and a modified rule 3 in the following format “IF FREQEVER is at least MEDIUM, THEN RESPONSE=YES.” Second, although presented rules are “user friendly” and easily understandable, they are not as easily applicable. Overlapping scales for FREQEVER makes it difficult for the user to apply the rules directly. It is necessary to carry out one more step - agree on the number where “medium” frequency starts (if we use initial database) or a level of a membership function to use in selecting “medium” frequency if we use the “rewritten” dataset. The assigned interval of 3 to 5 evidently includes “High” frequency (which does not bother us) but also includes “Low” frequency” which we possibly would not like to include into our mailing list. As a result a convenient approach for expressing continuous scales with overlapping intervals at the preprocessing stage may be not so convenient in applying simple rules. This presentation of the ordinal classification task allows use of this knowledge to make some additional conclusions about the quality of the training set of objects. Ordinal classification allows introduction of the notion of the consistency of the training set as well as completeness of the training set. In the case of the ordinal classification task quality of consistency in a classification (the same quality objects should belong to the same class) can be essentially extended: all objects with higher quality among attributes should belong to a class at least as good as objects with lower quality. This condition can be easily expressed as follows: if Ai(x) ≥ Ai(y) for each i=1, 2, …, p, then C(x) ≥ C(y). We can also try to evaluate representativeness of the training set by forming all possible objects in U (we can do that as we have a finite number of attributes with a small finite number of values in their scales) and check on the proportion of them presented in the training set. It is evident that the smaller this proportion the less discriminating power we’ll have for the new cases. We can also express the resulting rules in a more summarized form by lower and upper border instances for each class [6]. Advantages of using ordinal scales in an ordinal classification task do not lessen advantages of appropriate fuzzy set techniques. Fuzzy approaches allow softening strict limitations of ordinal scales in some cases and provides a richer environment for data mining techniques. On the other hand, ordinal dependences represent essential domain knowledge which should be incorporated as much as possible into the mining process. In some cases the overlapping areas of attribute scales may be resolved by introducing additional linguistic ordinal levels. For example, we can introduce an

468

D.L. Olson, H. Moshkovich, and A. Mechitov

ordinal scale for age with the following levels: “Young” (less than 30), “Between young and medium” (30 to 40), “Medium” (40 to 50), “Between medium and old” (50 to 60) and “Old” (over 60). Though it will increase the dimensionality of the problem, it would provide crisp intervals for the resulting rules. Ordinal scales and ordinal dependences are easily understood by humans and are attractive in rules and explanations. These qualities should be especially beneficial in fuzzy approaches to classification problems with ordered classes and “linguistic summaries” in the discretization process. The importance of ordinal scales for data mining is evidenced by appearance of this option in many established mining techniques. See5 includes the variant of ordinal scales in the problem description [9].

4 Conclusions Fuzzy representation is a very suitable means for humans to express themselves. Many important business applications of data mining are appropriately dealt with by fuzzy representation of uncertainty. We have reviewed a number of ways in which fuzzy sets and related theories have been implemented in data mining. The ways in which these theories are applied to various data mining problems will continue to grow. Ordinal data is stronger than nominal data. There is extra knowledge in knowing if a greater value is preferable to a lesser value (or vice versa). This extra information can be implemented in decision tree models, and our results provide preliminary support to the idea that they might strengthen the predictive power of data mining models. Our contention is that fuzzy representations better represent what humans mean. Our brief experiment was focused on how much accuracy was lost by using fuzzy representation in one application – classification rules applied to credit applications. While we expected less accuracy, we found that the fuzzy models (as applied by See5 adjusting rule limits) usually actually were more accurate. Models applied to categorical data as a means of fuzzification turned out less accurate in our small sample. While this obviously cannot be generalized, we think that there is a logical explanation. While fuzzification will not be expected to yield better fit to training data, the models obtained by using fuzzification will likely be more robust, which is reflected in potentially equal if not better fit on test data. The results of these preliminary experiments indicate that implementing various forms of fuzzy analysis will not necessarily lead to reduction in classification accuracy.

References 1. Au, W-H, Keith C. C. Chan: Classification with Degree of Membership: A Fuzzy Approach. ICDM (2001): 35-42 2. David, B. A. (1992): Automated generation of symbolic multiattribute ordinal knowledgebased DSSs: Methodology and applications. Decision Sciences, 23(6), 157-1372 3. Chan, Keith C. C., Wai-Ho Au, Berry Choi: Mining Fuzzy Rules in A Donor Database for Direct Marketing by a Charitable Organization. IEEE ICCI (2002): 239-246

An Experiment with Fuzzy Sets in Data Mining

469

4. Dubois, D., E. Hullermeier, H. Prade: A Systematic Approach to the Assessment of Fuzzy Association Rules. Data Mining and Knowledge Discovery, (2006), July, 1-26 5. Larichev, O.I., Moshkovich, H.M. (1994): An approach to ordinal classification problems. International Trans. on Operations Research, 82, 503-521 6. Moshkovich H.M., Mechitov A.I., Olson, D.: Rule Induction in Data Mining: Effect of Ordinal Scales. Expert Systems with Applications Journal, (2002), 22, 303-311 7. Pawlak, Z.: Rough set, International Journal of Computer and Information Sciences. (1982), 341-356 8. Pearl, J.: Probabilistic reasoning in intelligent systems, Networks of Plausible inference, Morgan Kaufmann, San Mateo,CA (1988) 9. See5 - http://www.rulequest.com 10. Shi, Y., Peng, Y., Kou, G., Chen, Z.: Classifying credit card accounts for business intelligence and decision making: A multiple-criteria quadratic programming approach, International Journal of Information Technology & Decision Making 4:4 December (2005), 581-599 11. Slowinski, R. (1995): Rough set approach to decision analysis. AI Expert,19-25 12. Yager, R.R.: On Linguistic Summaries of Data, in G. Piatetsky-Shapiro and W.J. Frawley 9Eds.) Knowledge Discovery in Databases, Mento Park, CA: AAAI/MIT Press, (1991), 347-363 13. Zadeh, L.A.: Fuzzy sets, Information and Control 8 (1965), 338-356 14. Zhao, K.-G.: Set pair analysis – a new concept and new systematic approach. Proceedings of national system theory and regional analysis conference, Baotou (1989) (In Chinese) 15. Zhao, K.-G.: Set pair analysis and its preliminary application, Zhejiang Science and Technology Press (2000) (In Chinese)

An Application of Component-Wise Iterative Optimization to Feed-Forward Neural Networks Yachen Lin Fidelity National Information Services, Inc. 11601 Roosevelt Blvd-TA76 Saint Petersburg, FL 33716 [email protected]

Abstract. Component-wise Iterative Optimization (CIO) is a method of dealing with a large data in the OLAP applications, which can be treated as the enhancement of the traditional batch version methods such as least squares. The salient feature of the method is to process transactions one by one, optimizes estimates iteratively for each parameter over the given objective function, and update models on the fly. A new learning algorithm can be proposed when applying CIO to feed-forward neural networks with a single hidden layer. It incorporates the internal structure of feed-forward neural networks with a single hidden layer by applying the algorithm CIO in closed-form expressions to update weights between the output layer and the hidden layer. Its optimally computational property is a natural consequence inherited from the property of the algorithm CIO and is also demonstrated in an illustrative example.

1 Introduction In recent years, the development of technology has paved the way for the industry to use more sophisticated analytics for making business decisions in on-line analytical processing (OLAP). In the check and credit card processing business, for example, the advanced artificial intelligence is widely used to authorize transactions. This is mainly due to the fact that the fraud activities nowadays are more and more aggressive than ever before. Once certain forms of fraud were identified or caught by the industry, a new form will be intercepted in a very short period of time. People are convinced that fighting fraud requires a non-stopping effort, i.e. constantly updating fraud pattern recognition algorithms and timely incorporating new fraud patterns in the algorithm development process. The traditional method of least squares presents a challenge for updating the model on the fly of transactions in both a general linear model and a general non-linear model. This challenge is mainly due to the matrix manipulation in the implementation of the computation. In dealing with the challenge, both industry and academic researchers have made substantial efforts. The most successful stories, however, are for the linear case only and mainly for the improvement of computational properties of least squares. Since the method known as recursive least squares - RLS was derived; many variants of Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 470–477, 2007. © Springer-Verlag Berlin Heidelberg 2007

An Application of Component-Wise Iterative Optimization

471

RLS have been proposed for a variety of applications. A method so called a sliding window RLS was discussed in many papers such as [2] and [3]. By applying QR decomposition, U-D factorization, and singular value decomposition (SVD), more computationally robust implementations of RLS have been discussed in papers such as [1]. Certainly, these researches have substantially increased the computational efficiency of RLS, of cause LS algorithms, but they are limited for the linear case only. Furthermore, the core ingredient in the computation of the algorithm of RLS and its variants are still a matrix version, although some matrices are more re-usable this time in the implementation. In view of the difficulties that traditional least squares have when updating models on the fly of transactions, a new procedure – Component-wise Iterative Optimization (CIO) was proposed in [10]. Using the new method of CIO, updating models on the fly of transactions becomes straightforward for both linear and non-linear cases. More importantly, the method itself yields an optimal solution with the objective function of sum of squares, in particular, least square estimates when a general linear model is used. The method CIO can be described as follows. Let X be a new pattern, F be the objective function, and

E = (e1( 0) ,", e (p0) ) t ∈ Θ p be the initial estimates, where Θ p is the domain of the parameter vector. Given the notation below

e 1( 1 ) = Arg _ OP F ( e 1 , e 2( 0 ) , " , e e1 ∈ Θ

(0 ) p

, X ) ,

e1(1) is an optimal solution for the parameter of e1 over the objective function F given the sample pattern of X and e2 , …, e p being held fixed.

then

With the above given notations, the method CIO can be described by the following procedure. Given the initial estimates of E CIO updates the estimates in Step 1. Compute e

(1 ) 1

= (e1( 0) ,", e (p0) ) t ∈ Θ p , the method of

p steps below:

= Arg _ OP F ( e 1 , e 2( 0 ) , " , e (p0 ) , X ) e1 ∈ Θ

Step 2. Compute e 2(1) = Arg _ OP F ( e1(1) , e 2 , e 3( 0 ) " , e (p0 ) , X ) by substi-

e2 ∈ Θ

(1)

(0)

tuting e1 for e1 … Step p. Compute e (p1) = Arg _ OP F ( e1(1) ,..., e P(1−) 1 , e p , X ) by substituting

ep ∈ Θ

e

(1) k

for

e , k = 1, 2,... p − 1 . (0) k

After these steps, the initial estimates

(e1(1) ,", e (p1) ) t .

(e1( 0) ,", e (p0) ) t can be updated by

472

Y. Lin

The idea of CIO is no stranger to one who is familiar with the Gibbs sampler by [6]. It is no harm or may be easier to understand the procedure of CIO in terms of the non-Bayesian version of Gibbs sampling. The difference is that CIO generates optimal estimates for a given model and the Gibbs sampler or MCMC generates samples from some complex probability distribution. It has been shown in [10] that the procedure of CIO converges. How can the algorithm CIO be applied in the neural network field? It is well known that a useful learning algorithm is developed for generating optimal estimates based on specific neural network framework or structure. Different forms and structures of f represent the different types of neural networks. For example, the architecture of the feed-forward networks with a single hidden layer and one output can be sketched in the following figure:

+1

+1

x1 x2

y

xp

However, too many learning algorithms proposed in the literatures were just ad hoc in nature and their reliability and generalization were often only demonstrated on applications for limited empirical studies or simulation exercises. Such analyses usually emphasize on a few successful cases of applications and did not necessarily establish a solid base for more general inferences. In the study of this kind, much theoretical justification is needed. In the present study, a useful learning algorithm can be derived from a well established theory of the algorithm CIO. To establish the link between CIO and neural networks, we can look at any types of neural networks in a view of a functional process. A common representation of the processing flow based on neural networks can be written as follows: (1.1) y = f ( x i j , w j , j = 0 , 1 ,..., p ) + ε i

i

where

ε i is an error random term, i = 1,2, " , n , and w = ( w0 , w1 ,", w p ) t

is the

parameter vector. Information first is summarized into α

α i = (α 0i ,..., α pi ) t ∈ Θ ⊆ R p +1 activated

by

a

t

X i , where X i = (1, x1i ,..., x pi ) t , and

α = [α1 , α 2 ,..., α m ] . φ in the

and

function

Secondly it is way

of

Φ(α X i ) = (1, φ (α X i ),..., φ (α X i )) . Then it is further summarized in the t

t 1

t 1

t

An Application of Component-Wise Iterative Optimization

output node by applying new weights to the activation function, i.e. β where β

t

473

Φ (α t X i ) ,

= ( β 0 , β1 ,..., β m )t ∈ R m+1 . Thus, f = β t Φ(α t X i ) .

From the above discussion, we can see that feed-forward neural network with a single hidden layer fit into the general modeling frame work of (1.1) very well. Only in this setting, the targeted or response variable Y can be expressed by the function

f = β t Φ (α t X i ). Therefore, one can image that the algorithm CIO can be applied to feed-forward neural networks with a single hidden layer because the function

f = β t Φ(α t X i ) is a special case of (1.1). The rest of the paper is structured in the following way: In the next section, we discuss an application of CIO to feed-forward neural networks with a single hidden layer. Certainly, a new learning algorithm originated from the application of CIO will be discussed in a relatively detailed fashion there. At the final, an illustration example by using the new learning algorithm is given.

2 An Application of CIO to Feed-Forward Neural Networks Training neural networks to reveal the proper pattern in a data set is a non-trivial task. The performance of a network is often highly associated with the effectiveness of a training algorithm. The well-known back-propagation algorithm [12] is a popular approach to train a multi-layer perceptron, i.e. feed-forward networks by minimizing the square errors. Some of its properties have been studied through a number of applications. With the development of high power computing equipment, many alternative algorithms were proposed such as the development of second-order learning algorithm and classical approaches of Gauss-Newton and Newton-Raphson. [8] gave a learning algorithm by using the projection pursuit technique for optimizing one node at a time. The approach was further developed in [7]. Some other existing algorithms in optimization approaches were also used. The comparisons of these algorithms have been conducted for some cases in [13]. [9] proposed a learning algorithm by using Hessian matrix in the update recursive formula - a variation of the second-order learning algorithm. In training feed-forward neural networks with a single hidden layer, its special structure for the processing flow can be exploited. From the discussion in section 1, we know that the output can be expressed in the following model:

yi = β t Φ (α t X i ) + ε i ,

(2.1)

Φ(α t X i ) = (1, φ (α 1t X i ),..., φ (α 1t X i )) t and ε i is an error random term, i = 1,2, " , n . If the objective function of the mean squared errors is given, then

where

training the neural network is equivalent to finding least square estimates of the weight parameters.

474

Y. Lin

Since (2.1) is only a special form of (1.1), given the objective function of the mean squared errors, we can apply the algorithm CIO to train the network. There are two options: (a) directly to apply CIO to (2.1), and (b) only apply CIO to β

= ( β 0 , β1 ,..., β m )t ∈ R m+1 , the weight parameters vector between the hidden

layer and the output layer. Considering the condition of the theorem for CIO, we only discuss option (b) in this paper. Simple computations for (2.1) based on CIO lead to Thus, updating for the estimates of tion that given

g ' ( β (j k ) (i )) = φ (α tj X i ).

β = ( β 0 , β1 ,..., β m )t ∈ R m+1

, under the condi-

α = [α1 , α 2 ,..., α m ], α i = (α 0i ,..., α pi ) t ∈ Θ ⊆ R p +1 , can be done

simply following the procedure below. Step 0. Initial value: Choose an initial value of

β ( 0) = ( β 0( 0) , β 1( 0) ,", β p( 0) ) t

for β randomly or by certain optimal rules such as least squares.

β (1) = ( β 0(1) , β 1(1) ," , β p(1) ) t :

Step 1. Compute Compute

β 0(1) : Given a sample pattern ( y i , xi 1 ,..., xi p ),

the equation

yi − g ( β 0 ) = 0, and denoted by β 0(1) (i ).

sample patterns

Repeat n times for all

( y i , xi 1 ,..., xi p ), i = 1,..., n , then let

β 0(1) = Compute

we can find the solution of

β 1(1) :

1 n ∑ β 0(1) (i). n i

β 0(1)

First, substitute

for

β 0( 0)

in function of

g ( β 1 ), where

g ( β 1 ) = f ( β 1 , given xi j , β 0(1) , β k( 0) , k ≠ 0, 1, k = 0,1,..., p) . Then, solve the equation ple

y i − g ( β 1 ) = 0, and the solution is denoted by β 1(1) (i )

pattern

for given sam-

( y i , xi 1 ,..., xi p ). Repeat n times for all sample patterns

( y i , xi 1 ,..., xi p ), i = 1,..., n , then let

β

(φ (α X )) =∑ ∑ (φ (α X ) ) i =1

2

t 1

n

(1) 1

n

1

i

t 1

2

β1(1) (i ).

i

(p) Compute the last component β p : By the p-1 steps above, components of (1)

β are taken as β l(1) , l = 0,1,..., p − 1 in the equation

function of

g ( β p ) . Then, solve the

yi − g ( β p ) = 0, and the solution is denoted by β p(1) (i )

for given

An Application of Component-Wise Iterative Optimization

sample patterns ( y i , xi 1 ,..., x i p ), i

β

(φ (α X )) =∑ ∑ (φ (α X ) ) i =1

Step k. Compute get

= 1,..., n , and then let 2

t p

n

(1) p

475

n

1

i

t p

2

β p(1) (i ).

i

β ( k ) = ( β 0( k ) , β 1( k ) ,", β p( k ) ) t : Repeat the Step 1 k times, then

β (k ) .

Let us denote the above procedure by CIO(β ; α), which means that given α, update β by CIO(β ; α). The other group of weight parameters in the network

α = [α1 , α 2 ,..., α m ], α i = (α 0i ,..., α pi ) t ∈ Θ ⊆ R p +1

can be updated by one of

many commonly used iterative procedures such as Gauss-Newton, Newton-Raphson, and Conjugate Gradient, denoted by CUIP(α; β), which means that given β, update α by CUIP(α; β). Given the above notations, let Φ be the activation function, x be input features, and y be the response, the following figure shows the updating procedure. Algorithm by Neural-CIO (α, β, Φ, Χ, Υ) 1. α(old) ← α( 0 ) β (old) ← β ( 0 ) 2. 3. SSR ← Criterion(α( 0 ), β ( 0 ), Φ, Χ, Υ) 4. While SSR > ε do; β (new) ← CIO(β (old) ; α(old) ) α(new) ← CUIP( α(old) ; β (new) ) SSR ← Criterion(α( new), β ( new ), Φ, Χ, Υ) 5. Return α( new), β ( new ), SSR The advantage of function CIO(β (old) ; . ) over the other available learning algorithm is its closed form, i.e. β (new) CIO(β (old) ; . ). To update the weight parameter vector β, we do not need to apply iterations while updating by the closed form. Therefore, it is more efficient computationally. In the section, we will show this computational efficiency by a numeric example. For the function Criterion(α( new), β ( new ), Φ, Χ, Υ), it can be of many forms such like the mean squared error, the number of iterations, or the lackness of training progress.

3 An Illustrative Example This section gives a numeric example of a classification problem using Fisher’s famous Iris data. A comprehensive study on applying neural networks to this data set was given in [11]. In the following, both algorithms Neural-CIO and Backpropagation are implemented and compared over the data in the framework of three

476

Y. Lin

feed-forward neural networks with a single hidden layer, 4-2-1, 4-3-1, 4-4-1, i.e. 2, 3, and 4 hidden nodes, respectively. For the data, a sample of three records of the original data can be seen in the following table. Table 1. Three records of Fisher’s Iris data

Sepal Length 5.1 7.0 6.3

Sepal width 3.5 3.2 3.3

Petal length 1.4 4.7 6.0

Petal width 0.2 1.4 2.5

Class Setosa Versicolor Verginica

All measurements are in centimeters. It is well-known that the class Setosa can be linearly separated from the other two and the other two can not be separated linearly. Thus, we only apply the algorithms to the network to separate these two classes. The data is further transformed using the following formulas and then each record is assigned a class number either 0.0999 or 0.0001. Sepal length == (Sepal length - 4.5) / 3.5; Sepal width == (Sepal width - 2.0) / 1.5; Petal length == (Pepal length - 3.0) / 3.5; Petal width == (Pepal width - 1.4) / 1.5; Table 2. Two records of transformed Fisher’s Iris data

Sepal Length 0.7143 0.5143

Sepal width 0.8000 0.8667

Petal length 0.4857 0.4286

Petal width 0.0000 0.7333

Class 0.0999 0.0001

Note: There are total 100 records in the data set and 50 for each class.

In the training process, for both algorithms, the stopping rule is chosen to be the mean squared error less than a pre-assigned number. The activation function is taken the form of φ ( x ) = (1 + e ) . The training results and performance are summarized in the table in the next page. The results in the table clearly shows the advantages that the new learning algorithm incorporates the internal structure of feed-forward neural networks with a single hidden layer by applying the algorithm CIO in closed-form expressions to update weights between the output layer and the hidden layer. Its optimally computational property is a natural consequence inherited from the property of the algorithm CIO and this point has been further verified in the above illustrative example. − x −1

Table 3. Comparison between Neural-CIO and Back-propagation

Structure 4-2-1 4-3-1 4-4-1

Error 0.0002 0.0002 0.0002

#Misclassification CIO Back 6 6 5 6 5 6

# Iteration CPU Time(second) CIO Back CIO Back 2433 9092 68 93 1935 8665 96 120 811 6120 60 110

Note: CIO means Neural-CIO and Back means Back-propagation.

An Application of Component-Wise Iterative Optimization

477

References 1. Baykal, B., Constantinids, A.: Sliding window adaptive fast QR and QR-lattice algorithm. IEEE Signal Process 46 (11), (1998) 2877-2887 2. Belge, M., Miller, E.: A sliding window RLS-like adaptive algorithm for filtering alphastable noise. IEEE Signal Process Letter 7 (4), (2000) 86-89 3. Choi, B., Bien, Z.: Sliding-windowed weighted recursive least-squares method for parameter estimation. Electron Letter 25 (20), (1989) 1381-1382 4. Fisher, R: The use of multiple measurements in taxonomic problems, Ann. Eugencis 7, Pt II, (1939). 197-188 5. Fletcher, R.: Practical Methods of Optimization Vol I: Unconstrained Optimization, Comput. J. 6, (1980). 163 – 168 6. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distribution and Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence 6, (1984) 721-741 7. Hwang, J. N., Lay, S. R., Maechler, M., and Martin, D. M.: Regression modeling in backpropagation and projection pursuit learning, IEEE Trans. on Neural networks vol. 5, 3, (1994)342 - 353 8. Jones, L. K.: A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training, Ann. Statist. Vol. 20 (1992) 608- 613 9. Karayiannis, N. B., Venetsanopoulos, A. N.: Artificial Neural Networks: Learning algorithms, Performance evaluation, and Applications KLUWER ACADEMIC PUBLISHERS(1993). 10. Lin, Y.: Component-wise Iterative Optimization for Large Data, submitted (2006) 11. Lin, Y.: Feed-forward Neural Networks – Learning algorithms, Statistical properties, and Applications, Ph.D. Dissertation, Syracuse University (1996) 12. Rumelhart, D. E., Hinton, E. G., Williams, R. J.: Learning internal representations by error propagation, Parallel Distributed Processing. Chap. 8, MIT, Cambridge, Mass. (1986) 13. Watrous, R. L.: Learning algorithms for connectionist networks: applied gradient methods of nonlinear optimization, IEEE First Int. Conf. Neural Networks, San Diego, (1987) II 619-627

ERM-POT Method for Quantifying Operational Risk for Chinese Commercial Banks

Fanjun Meng 1, Jianping Li 2, and Lijun Gao 3

1 Economics School, Renmin University of China, Beijing 100872, P.R. China
[email protected]
2 Institute of Policy & Management, Chinese Academy of Sciences, Beijing 100080, P.R. China
[email protected]
3 Management School, Shandong University of Finance, Jinan 250014, P.R. China
[email protected]

Abstract. Operational risk has become an increasingly important topic for Chinese commercial banks in recent years. Considering the huge operational losses, extreme value theory (EVT) has been recognized as a useful tool in analyzing such data. In this paper, we present an ERM-POT (Exponential Regression Model and Peaks-Over-Threshold) method to measure operational risk. The ERM-POT method leads to bias-corrected estimators and techniques for optimal threshold selection. The experiment results show that the method is reasonable. Keywords: operational risk; EVT; POT; ERM; VaR.

1 Introduction

Basel II for banking mandates a focus on operational risk. In the Basel framework, operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. Operational risk is one of the most important risks for Chinese commercial banks and has brought them huge losses in recent years. Considering the size of these events and their unsettling impact on the financial community, as well as the growing likelihood of operational risk losses, it is very important to analyze, model and predict rare but dangerous extreme events, and sound quantitative approaches are needed. EVT has developed very rapidly over the past two decades. It has been recognized as a useful set of probabilistic and statistical tools for modelling rare events, and its impact on insurance, finance and quantitative risk management is well recognized [2]. The distribution of operational risk is heavy-tailed



This research has been partially supported by a grant from National Natural Science Foundation of China (# 70531040) and the President Fund of Institute of Policy and Management, Chinese Academy of Sciences (CASIPM) (0600281J01). The Corresponding author.



and high-kurtosis. Considering the nature of operational risk and its unsettling impact, EVT can play a major role in analyzing such data; a fast growing literature exists, see references [1],[2],[3],[4]. In this paper, we present an ERM-POT method to measure operational risk. In the POT method, how to choose the threshold in an optimal way is a delicate matter: a balance between bias and variance has to be struck. In general, the Hill plot and the mean excess function (MEF) can be used to choose the threshold. However, selection of the threshold by these two approaches may produce biased estimates because both are empirical. The ERM-POT method leads to bias-corrected estimators and techniques for optimal threshold selection. With ERM-POT, the optimal threshold is selected, and then VaR is obtained. The paper is organized as follows: Section 2 gives a brief view of the ERM-POT method, Section 3 reports the experiment with ERM-POT, and the last section concludes the paper.

2 The ERM-POT Method

Given a threshold u and a heavy-tailed sample X_1, ..., X_n, let N_u denote the number of exceeding observations X_{i_1}, ..., X_{i_{N_u}} and let Y_j = X_{i_j} − u ≥ 0 denote the excesses. The distribution of excess values of X over the threshold u is defined by

F_u(x) := P(X − u ≤ x | X > u) = (F(x + u) − F(u)) / (1 − F(u)).   (1)

For a sufficiently high threshold u, F_u(x) can be approximated by the Generalized Pareto Distribution (GPD) [7]:

G_{γ,σ}(x) = 1 − (1 + γx/σ)^{−1/γ} for γ ≠ 0,  and  G_{γ,σ}(x) = 1 − exp(−x/σ) for γ = 0,   (2)

where G_{γ,u,σ}(x) := G_{γ,σ}(x − u), with shape parameter γ and scale parameter σ. A GPD is fitted to the excesses Y_1, ..., Y_{N_u} to obtain estimates γ̂ and σ̂ by Maximum Likelihood (ML). For x > u, from equations (1) and (2), we estimate the tail of F by

F̂(x) = 1 − (N_u/n) (1 + γ̂ (x − u)/σ̂)^{−1/γ̂},

where F_n(u) = 1 − N_u/n is the empirical distribution function at u. For a given probability p, inverting F̂(x) then yields the following estimator for high quantiles above the threshold u [7]:

x̂_p = u + (σ̂/γ̂) ((N_u/(np))^{γ̂} − 1),  and hence  VaR = u + (σ̂/γ̂) ((N_u/(np))^{γ̂} − 1).   (3)
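For illustration, once the threshold and the GPD parameters are available, the quantile estimator (3) amounts to only a few lines of code. The sketch below is our own (the paper's computations were done in S-Plus); it assumes SciPy's genpareto for the ML fit, and the function and variable names are illustrative only.

import numpy as np
from scipy.stats import genpareto

def pot_var(losses, u, tail_p):
    # losses: observed loss sample X_1, ..., X_n; u: chosen threshold;
    # tail_p: tail probability p (e.g. 0.001 for the 99.9% VaR)
    losses = np.asarray(losses, dtype=float)
    excesses = losses[losses > u] - u
    n, n_u = losses.size, excesses.size
    # ML fit of the GPD to the excesses, location fixed at 0 (heavy tail: gamma > 0)
    gamma_hat, _, sigma_hat = genpareto.fit(excesses, floc=0.0)
    # equation (3): VaR = u + (sigma/gamma) * ((N_u / (n p))^gamma - 1)
    var = u + sigma_hat / gamma_hat * ((n_u / (n * tail_p)) ** gamma_hat - 1.0)
    return gamma_hat, sigma_hat, var

With the data of Section 3 one would call, for example, pot_var(loss_data, u=43200, tail_p=0.001) to obtain an estimate of the 99.9% VaR.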

Next we choose the threshold u. For simplicity, denote k = N_u. The ERM is proposed for the log-spacings of the order statistics X_{1,n} ≤ X_{2,n} ≤ ... ≤ X_{n,n} [6,7]:

Z_{j,k} = j log(X_{n−j+1,n}/X_{n−j,n}) ∼ (ξ + b_{n,k} (j/(k+1))^{−ρ}) f_{j,k},  1 ≤ j ≤ k,   (4)


with {f_{j,k}, 1 ≤ j ≤ k} a pure random sample from the standard exponential distribution, shape parameter ξ, real constant ρ ≤ 0 and rate function b. The Hill estimator is given, for k ∈ {1, ..., n − 1}, by

H_{k,n} = (1/k) Σ_{j=1}^{k} (log X_{n−j+1,n} − log X_{n−k,n}).

We choose the threshold by the asymptotic mean squared error (AMSE) of the Hill estimator [7], given by

AMSE(H_{k,n}) = Avar(H_{k,n}) + Abias²(H_{k,n}) = ξ²/k + (b_{n,k}/(1 − ρ))².   (5)

Similar to the adaptive threshold selection method [6], in the ERM we calculate the estimates ξ̂, b̂_{n,k}, ρ̂ with ML for each k ∈ {3, ..., n − 1}, determine AMSE(H_{k,n}) for each such k, and then determine the optimal k as k_opt = argmin_k AMSE(H_{k,n}). Thus we choose X_{n−k_opt,n} as the threshold.
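The selection rule can be prototyped as below. To keep the sketch short, ρ is fixed at −1 and ξ and b_{n,k} are estimated by ordinary least squares on the log-spacings Z_{j,k} instead of by the full ML fit of the exponential regression model, so this is only a rough approximation of the procedure of [6,7]; all names are ours.

import numpy as np

def select_threshold(losses):
    # assumes positive, heavy-tailed loss data
    x = np.sort(np.asarray(losses, dtype=float))          # X_{1,n} <= ... <= X_{n,n}
    n = x.size
    best_k, best_amse = None, np.inf
    for k in range(3, n):                                  # k = 3, ..., n-1
        j = np.arange(1, k + 1)
        z = j * np.log(x[n - j] / x[n - j - 1])            # log-spacings Z_{j,k} of (4)
        # crude ERM fit with rho fixed at -1: E[Z_{j,k}] ~ xi + b_{n,k} * j/(k+1)
        b_hat, xi_hat = np.polyfit(j / (k + 1.0), z, 1)
        amse = xi_hat ** 2 / k + (b_hat / 2.0) ** 2        # equation (5) with rho = -1
        if amse < best_amse:
            best_k, best_amse = k, amse
    return x[n - best_k - 1], best_k                       # threshold X_{n-k_opt,n} and k_opt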

3 Experiment

3.1 Data Set

For obvious reasons, operational risk data are hard to come by, and this is to some extent true for Chinese commercial banks. We collected 204 operational losses of more than 20 banks from the public media; the losses in the database range from 1994 to 2006. We use a quantile-quantile plot (QQ-plot) to infer the tail behavior of the observed losses. From Fig. 1, the 204 publicly reported operational loss data are heavy-tailed (units: ten thousand RMB).

Fig. 1. QQ plot of the operational risk losses (ordered data vs. exponential quantiles)

The estimators and quantiles are obtained with the software S-Plus.

3.2 Results and Analysis

From Table 1, the threshold is u = 43200 ten thousand RMB. Almost 80% of the operational losses are less than this value, and most extreme values are beyond it, so the calculated threshold sounds reasonable.


Table 1. The results of ERM-POT (units: ten thousand RMB)

u     | γ        | σ        | VaR0.95  | VaR0.99 | VaR0.999
43200 | 1.505829 | 116065.6 | 417740.1 | 5062954 | 163319974

The shape parameter characterizes the tail behavior of a distribution: the larger γ, the heavier the tail. The estimate we obtained, γ = 1.505829 > 1, shows that the operational losses of Chinese commercial banks are severely heavy-tailed. At the same time, at the 99.9% confidence level the VaR (= 163319974), excluding the expected losses, accounts for nearly 6% of the average assets of Chinese commercial banks from 2003 to 2005. As a consequence, much attention should be paid to operational risk, and useful quantitative and qualitative approaches should be developed for Chinese commercial banks. Finally, comparing with the VaR in Lijun Gao [4], where VaR = 136328000, we see that the two results are close to each other.

4 Conclusions

In this paper, we presented an ERM-POT method to measure the operational risk of extremely heavy-tailed loss data. Selection of the threshold by the Hill plot or the MEF may produce biased estimates; the ERM-POT provides a solution to this problem. With ERM-POT, the optimal threshold is selected and then VaR is obtained. From Table 1, we see that the threshold sounds reasonable and the new method is useful.

References 1. Chavez-Demoulin, V., Davison, A.: Smooth extremal models in finance. The Journal of the Royal Statistical Society, series C 54(1) (2004) 183-199 2. Chavez-Demoulin, V., Embrechts, P., Neˇslehov´ a: Quantitative models for Operational Risk: Extrems, Dependence and Aggregation. The meeting Implementing an AMA for Operational Risk, Federal Reserve Bank of Boston.(2005) 3. Chernobai, A., Rachev, S., T.: Applying Robust Methods to Operational Risk Modeling. Journal of Operational Risk (Spring 2006) 27-41 4. Lijun Gao: The research on operational risk measurement and capital distribution of Chinese commercial banks, Doctorate thesis.(2006)(in Chinese) 5. Lijun Gao, Jianping Li, Weixuan Xu. Assessment the Operatinal Risk for Chinese Commercial Banks. Lecture Notes in Computer Science 3994. (2006) 501-508 6. Matthys, G. and Beirlant, J. :Adaptive threshold selection in tail index estimation. In Extremes and Integrated Risk Managenent, Risk Books, London.(2000a) 37-49. 7. Matthys, G. and Beirlant, J. : Estimating the extreme value index and high quantiles with exponential regression models. Statistica Sinica 13 (2003) 853-880 8. Neˇslehov´ a, J., Chavez-Demoulin, V., Embrechts, P., :Infinite-mean models and the LDA for operational risk. Journal of Operational Risk (Spring 2006)3-25

Building Behavior Scoring Model Using Genetic Algorithm and Support Vector Machines∗

Defu Zhang1,2, Qingshan Chen1, and Lijun Wei1

1 Department of Computer Science, Xiamen University, Xiamen 361005, China
2 Longtop Group Post-doctoral Research Center, Xiamen 361005, China
[email protected]

Abstract. In the increasingly competitive credit industry, one of the most interesting and challenging problems is how to manage existing customers. Behavior scoring models have been widely used by financial institutions to forecast customer’s future credit performance. In this paper, a hybrid GA+SVM model, which uses genetic algorithm (GA) to search the promising subsets of features and multi-class support vector machines (SVM) to make behavior scoring prediction, is presented. A real life credit data set in a major Chinese commercial bank is selected as the experimental data to compare the classification accuracy rate with other traditional behavior scoring models. The experimental results show that GA+SVM can obtain better performance than other models. Keywords: Behavior Scoring; Feature Selection; Genetic Algorithm; MultiClass Support Vector Machines; Data Mining.

1 Introduction

Credit risk evaluation decisions are crucial for financial institutions due to the high risks associated with inappropriate credit decisions. It is an even more important task today, as financial institutions have been experiencing serious competition during the past few years. The advantage of using behavior scoring models is that they allow financial institutions to make better decisions in managing existing clients by forecasting their future performance. The decisions to be made include what credit limit to assign, whether to market new products to these particular clients, and how to manage the recovery of the debt if the account turns bad. Therefore, new techniques should be developed to help predict credit more accurately. Currently, researchers have developed many methods for behavior scoring, in particular modern data mining techniques, which have made a significant contribution to the field of information science [1], [2], [3]. At the same time, with the size of databases growing rapidly, data dimensionality reduction becomes another important factor in building a prediction model that is fast, easy to interpret, cost effective, and

∗ This research has been supported by the academician start-up fund (Grant No. X01109) and the 985 information technology fund (Grant No. 0000-X07204) of Xiamen University.



generalizes well to unseen cases. Data reduction is performed via feature selection in our approach. Feature selection is an important issue in building classification systems. There are basically two categories of feature selection algorithms: feature filters and feature wrappers. In this paper we adopt the wrapper model of feature selection which requires two components: a search algorithm that explores the combinatorial space of feature subsets, and one or more criterion functions that evaluate the quality of each subset based directly on the predictive model [4]. GA is used to search through the possible combinations of features. GA is an extremely flexible optimization tool for avoiding local optima as it can start from multiple points. The input features selected by GA are used to train a Multi-Class Support Vector Machines (SVM) that extracts predictive information. The trained SVM is tested on an evaluation set, and the individual is evaluated both on predictive accuracy rate and complexity (number of features). This paper is organized as follows. In Section 2, we show the structure of the GA+SVM model, and describe how GA is combined with SVM. The experimental results are analyzed in Section 3. Conclusions are provided in Section 4.

2 GA+SVM Model for Behavior Scoring Problems Firstly, we will give a short overview of the principles of genetic algorithm and support vector machines. Further details can be found in [5], [6]. In order to use SVM for real-world classification tasks, we should extend typical two-class SVM to solve multiple-class problems. Reference [7] gives a nice overview about ideas of multi-class reduction to binary problems.

Fig. 1. A wrapper model of feature selection (GA+SVM)

Our behavior scoring model is a hybrid of the GA and SVM procedures, as shown in Fig. 1. In practice, the performance of a genetic algorithm depends on a number of factors. Our experiments used the following parameter settings: the population size is 50, the maximum number of generations is 100, the crossover rate is 0.9, and the mutation rate is 0.01. The fitness function has to combine the two different criteria described below to obtain better performance. In this paper we use F_accuracy and F_complexity to denote the two criteria.


F_accuracy: The purpose of this function is to favor feature sets with a high predictive accuracy rate. The SVM takes a selected set of features to learn the patterns and calculates the predictive accuracy. The radial basis function (RBF) is used as the basic kernel function of the SVM. With the selected features, the training data set is randomly split into D_train and D_validation at a ratio of 2:1. In addition, five iterations of the proposed method are used to reduce the effect of the randomized procedure, and F_accuracy is the average over the five iterations.

F_complexity: This function is aimed at finding parsimonious solutions by minimizing the number of selected features:

F_complexity = 1 − (d − 1)/(D − 1),   (1)

where D is the dimensionality of the full data set and d is the dimension of the selected feature set. We expect that lower complexity will lead to easier interpretability of the solution as well as better generalization. The fitness function of the GA can be described as follows:

Fitness(x) = F_accuracy(x) + F_complexity(x).   (2)
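To make the wrapper concrete, the sketch below evaluates one GA chromosome (a binary feature mask) following the description above: an RBF-kernel SVM, a random 2:1 split into D_train and D_validation, five repetitions, and the complexity term of (1). The use of scikit-learn and all function names are our own assumptions, not part of the original implementation.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def fitness(mask, X, y, n_repeats=5):
    # mask: binary vector of length D marking the features selected by the GA chromosome
    mask = np.asarray(mask, dtype=bool)
    d, D = int(mask.sum()), mask.size
    if d == 0:
        return 0.0
    accs = []
    for _ in range(n_repeats):
        # random 2:1 split of the training data into D_train and D_validation
        X_tr, X_val, y_tr, y_val = train_test_split(X[:, mask], y, test_size=1/3)
        clf = SVC(kernel='rbf')          # multi-class is handled internally (one-vs-one)
        clf.fit(X_tr, y_tr)
        accs.append(clf.score(X_val, y_val))
    f_accuracy = float(np.mean(accs))              # average of the five iterations
    f_complexity = 1.0 - (d - 1) / (D - 1)         # equation (1)
    return f_accuracy + f_complexity               # equation (2)

The GA then evolves the masks with the parameter settings given above (population 50, 100 generations, crossover 0.9, mutation 0.01), using this value as the chromosome fitness.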

3 Experimental Results

A credit card data set provided by a Chinese commercial bank is used to demonstrate the effectiveness of the proposed model. The data set covers the most recent eighteen months and includes 599 instances. Each instance contains 17 independent variables. The decision variable is the customer credit: good, normal, and bad credit. The numbers of good, normal, and bad instances are 160, 225, and 214 respectively. In this section, GA+SVM is compared with a pure SVM, a back-propagation neural network (BPN), Genetic Programming (GP) and logistic regression (LR). The ratio of the training to the test data set is 7:3. In order to compare the proposed method with the other models, five sub-samples are used to compare the predictive accuracy rates of those models. The predictive accuracy rates on the test data set are shown in Table 1. For the first sample, the feature subset selected by GA is shown in Table 2.

Table 1. Predictive accuracy rates of the proposed models

Sample   | GA+SVM | SVM    | BPN    | GP     | LR
Sample 1 | 0.8883 | 0.8771 | 0.8659 | 0.8827 | 0.8492
Sample 2 | 0.8994 | 0.8715 | 0.8676 | 0.8939 | 0.8659
Sample 3 | 0.9162 | 0.8883 | 0.8892 | 0.9106 | 0.8770
Sample 4 | 0.8771 | 0.8492 | 0.8431 | 0.8827 | 0.8436
Sample 5 | 0.8883 | 0.8659 | 0.8724 | 0.8883 | 0.8715
Overall  | 0.8940 | 0.8704 | 0.8676 | 0.8916 | 0.8614

Table 2. Features selected by GA+SVM in Sample 1

Feature Type                     | Selected Features
Customer's personal information  | Age, Customer type, Education level
Customer's financial information | Total asset, Average of saving


On the basis of the simulation results, we can observe that the classification accuracy rate of GA+SVM is higher than that of the other models. We consider that GA+SVM is more suitable for behavior scoring problems for the following reasons. Unlike BPN, which is only suited to large data sets, our model can perform well on small data sets [8]. In contrast with the pure SVM, GA+SVM can choose the optimal input feature subset for the SVM. In addition, unlike conventional statistical models, which need assumptions about the data set and attributes, GA+SVM can perform the classification task without this limitation.

4 Conclusions

In this paper, we presented a novel hybrid GA+SVM model for behavior scoring. Building a behavior scoring model involves the problems of feature selection and model identification. We used GA to search for possible combinations of features and SVM to score customers' behavior. On the basis of the experimental results, we can conclude that GA+SVM obtains higher accuracy on the behavior scoring problems. In future work, we may incorporate other evolutionary algorithms with SVM for feature subset selection. How to select the kernel function, the parameters and the feature subset simultaneously is also a topic for future work.

References 1. Chen, S., Liu, X: The contribution of data mining to information science. Journal of Information Science. 30(2004) 550-558 2. West, D.: Neural network credit scoring models. Computers & Operations Research. 27(2000) 1131-52 3. Li, J., Liu, J., Xu, W., Shi. Y.: Support Vector Machines Approach to Credit Assessment. International Conference on Computational Science. Lecture Notes in Computer Science, Vol. 3039. Springer-Verlag, Berlin Heidelberg New York (2004) 4. Kohavi, R., John, G.H.: Wrappers for feature subset selection. Artificial Intelligence. 1(1997) 273-324 5. Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs. 3rd edn. Springer-Verlag, Berlin Heidelberg New York (1996) 6. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer-Verlag, Berlin Heidelberg New York (1995) 7. Allwein, E., Schapire, R., Singer, Y.: Reducing multiclass to Binary: A Unifying Approach for Margin Classifiers. The Journal of Machine Learning Research. 1 (2001) 113-141 8. Nath, R., Rajagopalan, B., Ryker, R.: Determining the saliency of input variables in neural network classifiers. Computers & Operations Research. 8(1997) 767–773

An Intelligent CRM System for Identifying High-Risk Customers: An Ensemble Data Mining Approach

Kin Keung Lai1, Lean Yu1,2, Shouyang Wang2, and Wei Huang1

1 Department of Management Sciences, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong, {mskklai,msyulean}@cityu.edu.hk
2 Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, China, {yulean,sywang,whuang}@amss.ac.cn

Abstract. In this study, we propose an intelligent customer relationship management (CRM) system that uses support vector machine (SVM) ensembles to help enterprise managers effectively manage customer relationship from a risk avoidance perspective. Different from the classical CRM for retaining and targeting profitable customers, the main focus of our proposed CRM system is to identify high-risk customers for avoiding potential loss. Through experiment analysis, we find that the Bayesian-based SVM ensemble data mining model with diverse components and “choose from space” selection strategy show the best performance over the testing samples. Keywords: Customer relationship management, support vector machine, ensemble data mining, high-risk customer identification.

1 Introduction

Customer relationship management (CRM) has become more and more important today due to the intensely competitive environment and the increasing rate of change in the customer market. Usually, most enterprises are interested in knowing who will respond, activate, purchase, or use their products or services. However, customer risk avoidance and management is also a critical component in maintaining profitability in many industries, such as commercial banking and insurance. These businesses are concerned with the amount of risk they are taking by accepting an individual or a corporation as a customer. The sustainability and profitability of these businesses particularly depend on their ability to distinguish faithful customers from bad ones [1-2]. In order to enable these businesses to take either preventive or corrective immediate action, it is imperative to satisfy the need for an efficient and reliable model that can accurately identify high-risk customers with a potential default trend. In such a CRM system focusing on customer risk analysis, a generic approach is to apply a classification technique on similar data of previous customers — both faithful and delinquent customers — in order to find a relationship between the characteristics and potential default [1-2]. One important ingredient needed to accomplish


this goal is to seek an accurate classifier in order to categorize new or existing customers as good or bad. In the process of customer classification, data mining techniques, especially classification techniques, play a critical role; some existing techniques for risk identification can be found in [1-3]. Recent studies [2-3] found that a unitary data mining technique does not produce consistently good performance because each data mining technique has its own shortcomings and different suitability. The ensemble data mining technique is therefore an effective way to remedy this drawback. In this study, our aim is to propose a support vector machine (SVM) based ensemble data mining model to identify high-risk customers. The rest of the study is organized as follows. Section 2 describes the four-step SVM ensemble data mining model for CRM to identify high-risk customers in detail. In Section 3, we conduct some experiments with a real-world customer dataset. Finally, some conclusions are drawn in Section 4.

2 The SVM-Based Ensemble Data Mining System for CRM

As noted earlier, ensembling multiple classification models into an aggregated output has been an effective way to improve the performance of data mining [2]. A definition of effective ensemble classifiers was introduced by Hansen and Salamon [4], who stated: 'A necessary and sufficient condition for an ensemble of classifiers to be more accurate than any of its individual members is if the classifiers are accurate and diverse.' An accurate classifier is one that is well trained and whose error rate is better than a random selection of output classes. Two classifiers are diverse if they make different errors on the same input values. Different from their study [5], which is done using ANN, our tool is SVM, which is a more robust model than ANN. Generally, the SVM-based ensemble data mining system for high-risk customer identification comprises the following four main steps: data preparation, single classifier creation, ensemble member selection, and ensemble classifier construction, which are described as follows. The first step of this ensemble data mining system is to prepare the input data in a readily available format. The main task of this phase is to collect related data and to perform the necessary preprocessing. A systematic study of data preparation for complex data analysis is given by Yu et al. [5]. The second step is to create single classifiers. According to the bias-variance-complexity trade-off principle [6], an ensemble model consisting of diverse individual models (i.e., base models) with much disagreement is more likely to have good performance. Therefore, how to generate diverse models is the key to the construction of an effective ensemble model [3]. For the SVM classifier, there are three main ways of generating diverse base models: (a) utilizing different training data sets; (b) changing the kernel function of the SVM, such as the polynomial function or the Gaussian function; (c) varying the SVM model parameters, such as the parameters C and σ². Although there are many ways to create diverse base models, the above approaches are not necessarily independent. In order to guarantee diversity, a selection step is used to pick out some of them and construct an ensemble classifier for CRM. In order to select diverse classifiers, the subsequent step is to select some independent single models. There are many algorithms, such as principal component


analysis (PCA) [7], choose the best (CTB) [8] and choose from subspace (CFS) [8] for ensemble member selection. The PCA is used to select a subset of members from candidate members using the maximizing eigenvalue of the error matrix. The idea of CTB is to select the classifier with the best accuracy from candidate members to formulate a subset of all members. The CFS is based on the idea that for each model type, it chooses the model exhibiting the best performance. Readers can refer to [7-8]. After single classifiers are selected, ensemble classifier will be combined in the final step. Typically, majority voting based ensemble strategy [4] and Bayesian-based ensemble strategy [11] are the most popular methods for constructing an accurate ensemble model. Interested readers can refer to [4, 11] for more details.
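A compact sketch of steps two to four is given below. For brevity the base SVMs are diversified only by bootstrap samples and randomly varied (C, σ²), the selection step is reduced to a CTB-style pick of the most accurate members, and the combination is plain majority voting; the Bayesian combination of [11] and the PCA/CFS selection rules are not reproduced here, and the use of scikit-learn is our own choice.

import numpy as np
from sklearn.svm import SVC

def build_ensemble(X, y, X_val, y_val, n_members=20, n_keep=9, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        # step 2: diverse base classifiers via bootstrap samples and varied C, gamma
        idx = rng.integers(0, len(X), size=len(X))
        clf = SVC(C=10 ** rng.uniform(-1, 3), gamma=10 ** rng.uniform(-3, 1))
        clf.fit(X[idx], y[idx])
        members.append((clf.score(X_val, y_val), clf))
    # step 3: keep the most accurate members (a CTB-like selection rule)
    members.sort(key=lambda m: m[0], reverse=True)
    return [clf for _, clf in members[:n_keep]]

def majority_vote(ensemble, X):
    # step 4: majority voting; assumes integer class labels (e.g. 0 = good, 1 = high-risk)
    votes = np.array([clf.predict(X) for clf in ensemble]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])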

3 Experimental Analysis

In this section a real-world credit dataset is used to test the performance of the SVM-based ensemble data mining model. The dataset used in this study is from a financial service company in England, obtained from the accessory CD-ROM of a book [12]. For space considerations, the data description is omitted. For testing purposes, we randomly divide the scaled dataset into two parts: a training set with 800 samples and a testing set with 1071 samples. In particular, the total accuracy [1-2] is used to measure the efficiency of classification. For comparison, a logistic regression-based (LogR) ensemble and an artificial neural network-based (ANN) ensemble are also constructed. Note that the LogR ensemble, the ANN ensemble and the SVM ensemble are each constructed from 20 different classifiers. The different experimental results are reported in Table 1.

Table 1. The total accuracy of different ensemble models (%)

Ensemble | Selection strategy | Majority voting ensemble | Bayesian-based ensemble
LogR     | PCA | 57.65 | 60.08
LogR     | CTB | 58.63 | 62.29
LogR     | CFS | 59.49 | 60.86
ANN      | PCA | 68.63 | 73.35
ANN      | CTB | 69.06 | 71.24
ANN      | CFS | 68.65 | 72.93
SVM      | PCA | 70.75 | 76.36
SVM      | CTB | 71.30 | 77.68
SVM      | CFS | 75.63 | 87.06

From Table 1, we find the following interesting conclusions. (1) Diversity of individual classifiers can improve the performance of the SVM-based ensemble models; (2) CFS always performs the best among the three strategies of ensemble members’ selection; (3) Bayesian-based ensemble strategy always performs much better than majority voting ensemble strategy. These results and findings also demonstrate the effectiveness of the SVM-based ensemble models relative to other ensemble approaches, such as logit regression ensemble and neural network ensemble.


4 Conclusions In this study, we propose an intelligent CRM system that uses SVM ensemble techniques to help enterprise managers effectively manage customer relationship from a risk avoidance perspective. Through experiment analysis, we can easily find that the SVM-based ensemble model performs much better than LogR ensemble and ANN ensemble, indicating that the SVM-based ensemble models can be used as an effective CRM tool to identify high risk customers.

Acknowledgements This work is supported by the grants from the National Natural Science Foundation of China (NSFC No. 70221001, 70601029), Chinese Academy of Sciences (CAS No. 3547600), Academy of Mathematics and Systems Sciences (AMSS No. 3543500) of CAS, and City University of Hong Kong (SRG No. 7001677, 7001806).

References 1. Lai, K.K., Yu, L., Zhou, L.G., Wang, S.Y.: Credit Risk Evaluation with Least Square Support Vector Machine. Lecture Notes in Computer Science 4062 (2006) 490-495 2. Lai, K.K., Yu, L., Wang, S.Y., Zhou, L.G.: Credit Risk Analysis Using a Reliability-based Neural Network Ensemble Model. Lecture Notes in Computer Science 4132 (2006) 682-690 3. Lai, K.K., Yu, L., Huang, W., Wang, S.Y.: A Novel Support Vector Metamodel for Business Risk Identification. Lecture Notes in Artificial Intelligence 4099 (2006) 980-984 4. Hansen, L., Salamon, P.: Neural Network Ensemble. IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (1990) 993-1001 5. Yu, L., Wang, S.Y., Lai, K.K.: A Integrated Data Preparation Scheme for Neural Network Data Analysis. IEEE Transactions on Knowledge and Data Engineering 18 (2006) 217-230 6. Yu, L., Lai, K.K., Wang, S.Y., Huang, W.: A Bias-Variance-Complexity Trade-Off Framework for Complex System Modeling. Lecture Notes in Computer Science 3980 (2006) 518-527 7. Yu, L., Wang, S.Y., Lai, K.K.: A Novel Nonlinear Ensemble Forecasting Model Incorporating GLAR and ANN for Foreign Exchange Rates. Computers & Operations Research 32(10) (2005) 2523-2541 8. Partridge D., Yates, W.B.: Engineering Multiversion Neural-Net Systems. Neural Computation 8 (1996) 869-893 9. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer: New York (1995) 10. Suykens J.A.K., Vandewalle, J.: Least Squares Support Vector Machine Classifiers. Neural Processing Letters 9 (1999) 293-300 11. Xu, L., Krzyzak, A., Suen, C.Y.: Methods of Combining Multiple Classifiers and Their Applications to Handwriting Recognition. IEEE Transactions on Systems, Man, and Cybernetics 22(3) (1992) 418-435 12. Thomas, L.C., Edelman, D.B., Crook, J.N.: Credit Scoring and its Applications. Society of Industrial and Applied Mathematics, Philadelphia (2002)

The Characteristic Analysis of Web User Clusters Based on Frequent Browsing Patterns

Zhiwang Zhang and Yong Shi

School of Information, Graduate University of Chinese Academy of Sciences, and Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, Beijing 100080, China, [email protected]
Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, Graduate University of Chinese Academy of Sciences, Beijing 100080, China, [email protected]

Abstract. Web usage mining (WUM) is an important and fast developing area of web mining. Recently, some enterprises have been aware of its potentials, especially for applications in Business Intelligence (BI) and Customer Relationship Management (CRM). Therefore, it is crucial to analyze the behaviors and characteristics of web user so as to use this knowledge for advertising, targeted marketing, increasing competition ability, etc. This paper provides an analytic method, algorithm and procedure based on suggestions from literature and the authors’ experiences from some practical web mining projects. Its application shows combined use of frequent sequence patterns (FSP) discovery and the characteristic analysis of user clusters can contribute to improve and optimize marketing and CRM. Keywords: WUM, FSP, clustering.

1 Introduction

Data mining has been used by many organizations to extract valuable information from large volumes of data and then use it to make critical business decisions. In WUM, most work focuses on discovering web users' navigation patterns, association analysis, and clustering of users and web pages. However, this is insufficient for analyzing the characteristics of web user clusters after interesting frequent browsing patterns have been identified. In this paper, we first introduce related work in WUM and its analytic steps. We then discuss the three main steps of WUM, taking an educational web server as an example. Our main work lies in creating a data mart of web usage data, discovering FSP of web users, providing a method that measures the similarities among different FSP for user clustering, and providing an algorithm and its applications. In the end, a comparison between this algorithm and k-means and Kohonen maps is given.


2 Related Work

2.1 Taxonomy of Web Mining

Web mining involves a wide range of applications that aim at discovering and extracting hidden information from data stored on the Web. Web mining can be categorized into three different classes: (i) Web content mining, (ii) Web structure mining and (iii) WUM. For detailed surveys of Web mining please refer to [1]. Web content mining [1] is the task of discovering useful information available on-line. Web structure mining [1, 2, 3] is the process of discovering the structure of hyperlinks within the Web. WUM is the task of discovering the activities of users while they are browsing and navigating through the Web [1].

2.2 WUM

WUM, from the data mining perspective, is the task of applying data mining techniques to discover usage patterns from Web data in order to understand and better serve the needs of users navigating on the Web [1]. As with every data mining task, the process of WUM consists of three main steps: (i) data selection and preprocessing, (ii) pattern discovery and (iii) pattern analysis.

3 Data Selecting and Preprocessing

After setting up a definite business target, it is necessary to extract and select the types of data from the data sources. In general, in WUM we may use the following three types of data: (i) web data, generated by visits to a web site; (ii) business data, produced by the respective OLTP, OLAP, DSS and other systems; (iii) meta data, describing the web site itself, for example its content and structure. As discussed above, Web log data can mainly be obtained from internet or intranet resources. In this paper, the web log files are from the DePaul CTI Web server log. The preprocessed and filtered data is based on a random sample of users visiting this site over a 2-week period, which contains 13745 sessions and 683 page views from 5446 users. For the analysis, we have developed a data mart to support FSP mining and, further, the characteristic analysis of user clusters. The data model is reported in Fig. 1.

Fig. 1. Data mart tables and their relations


4 Frequent Patterns Discovery

4.1 FSP Discovery

In this section, we use the notion of FSP discovery, where temporal correlations among items in user transactions are discovered.

4.2 Results of Experimentation

We apply FSP discovery to the above dataset with the maximum pattern length set to 3; the FSP so discovered number 3514 in total, and the top ones are shown in Fig. 2.

User Support Confidence Antecedent1 ==> Antecedent2 ==> Consequent
4 0.0204 1.0000 /programs/2002/bachelorcs2002.asp ==> /news/default.asp
5 0.0255 1.0000 /admissions/international.asp ==> /admissions/
4 0.0204 1.0000 /authenticate/login.asp?section=advising&title=appointment&urlahead=apt_making/makeapts ==> /news/default.asp
7 0.0357 1.0000 /cti/core/core.asp?section=news ==> /news/default.asp
4 0.0204 1.0000 /news/news.asp?theid=586 ==> /news/default.asp

Fig. 2. FSP on the Web server log
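As an illustration of the kind of counting involved, the sketch below enumerates contiguous page-view subsequences of length 2 to 3 in each session and computes their support; it is only one simple way to obtain patterns of the form listed in Fig. 2, not necessarily the algorithm used for the experiment.

from collections import defaultdict

def frequent_sequences(sessions, max_len=3, min_support=0.02):
    # sessions: one list of pageview IDs per user session
    occurrences = defaultdict(set)
    for sid, pages in enumerate(sessions):
        for length in range(2, max_len + 1):
            for i in range(len(pages) - length + 1):
                occurrences[tuple(pages[i:i + length])].add(sid)   # count a session once
    n = len(sessions)
    return {seq: len(ids) / n for seq, ids in occurrences.items()
            if len(ids) / n >= min_support}
# each key is (antecedent1, ..., consequent); the value is its support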

5 The Characteristic Analysis of Web User Clusters

5.1 Similarity Measures

Given a weight set W for the pages of FSP of length k, the similarity between the FSP fp_i and fp_j is defined as

sim(fp_i, fp_j) = Σ_{l=1}^{k} w_l · eq(p_il, p_jl),  where eq(p_il, p_jl) = 1 if the two pages are the same and 0 otherwise.

Here eq(·,·) may also be taken as the combined probability distribution function of fp_i and fp_j.

5.2 Algorithm

In this part, we give an algorithm that implements maximum spanning tree (MST) clustering based on the above FSP (MSTCBFSP), as follows:

Input: FP = {fp_1, fp_2, ..., fp_m}, W = {w_1, w_2, ..., w_k}, γ.
Output: T // set of clusters.
Processing flow: // MST clustering based on FSP.
Step one: Compute the similarities between the different FSP and construct a graph G.
Step two: Build an MST T_max on the graph G.
Step three: Perform clustering analysis according to the MST T_max.
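The following sketch is our reading of MSTCBFSP: pairwise similarities from Section 5.1, a Prim-style construction of the maximum spanning tree T_max, and clusters obtained by discarding tree edges whose similarity falls below the threshold γ (the exact edge-cutting rule is an assumption on our part).

def similarity(fp_i, fp_j, w):
    # sim(fp_i, fp_j) = sum_l w_l * eq(p_il, p_jl); patterns are FSP of equal length k
    return sum(wl for wl, a, b in zip(w, fp_i, fp_j) if a == b)

def mst_cluster(patterns, w, gamma):
    m = len(patterns)
    sim = [[similarity(patterns[i], patterns[j], w) for j in range(m)] for i in range(m)]
    # Prim's algorithm on the similarities gives a maximum spanning tree T_max
    in_tree, edges = {0}, []
    best = {j: (sim[0][j], 0) for j in range(1, m)}
    while len(in_tree) < m:
        j = max(best, key=lambda v: best[v][0])
        s, parent = best.pop(j)
        in_tree.add(j)
        if s >= gamma:                          # keep only edges at least as strong as gamma
            edges.append((parent, j))
        for v in best:                          # update the candidate edges to the tree
            if sim[j][v] > best[v][0]:
                best[v] = (sim[j][v], j)
    # connected components of the kept edges are the clusters T
    clusters = {i: {i} for i in range(m)}
    for a, b in edges:
        merged = clusters[a] | clusters[b]
        for node in merged:
            clusters[node] = merged
    return {frozenset(c) for c in clusters.values()}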


5.3 Results of Experimentation

Each of the algorithms MSTCBFSP, k-means and Kohonen yields five clusters of web users with different characteristics when run on the above FSP data set (Table 1).

Table 1. Comparison of the results of MSTCBFSP, k-means and Kohonen

Algorithm | Clusters  | Percent (User) | Antecedent1 (PageviewID: UserPercent) | Antecedent2 (PageviewID: UserPercent) | Consequent (PageviewID: UserPercent)
K-means   | Cluster#1 | 3.59%  | 359: 100%, 557: 0% | 67: 100%, 1: 0% | 388: 100%, 666: 0%
K-means   | Cluster#2 | 64.54% | 557: 100%, 359: 0% | 1: 100%, 67: 0% | 388: 0%, 666: 0%
K-means   | Cluster#3 | 24.70% | 557: 100%, 359: 0% | 67: 100%, 1: 0% | 388: 100%, 666: 0%
K-means   | Cluster#4 | 6.77%  | 359: 100%, 557: 0% | 1: 0%, 67: 0%   | 388: 0%, 666: 0%
K-means   | Cluster#5 | 0.40%  | 359: 0%, 557: 0%   | 1: 0%, 67: 0%   | 388: 0%, 666: 100%
Kohonen   | Cluster-1 | 23.90% | 557: 100%, 359: 0% | 1: 0%, 67: 0%   | 388: 100%, 666: 0%
Kohonen   | Cluster-2 | 64.94% | 557: 100%, 359: 0% | 1: 100%, 67: 0% | 388: 0%, 666: 100%
Kohonen   | Cluster-3 | 0.80%  | 359: 0%, 557: 0%   | 1: 0%, 67: 100% | 388: 100%, 666: 0%
Kohonen   | Cluster-4 | 3.59%  | 359: 100%, 557: 0% | 1: 0%, 67: 100% | 388: 100%, 666: 0%
Kohonen   | Cluster-5 | 6.77%  | 359: 100%, 557: 0% | 1: 0%, 67: 0%   | 388: 0%, 666: 0%
MSTCBFSP  | Cluster+1 | 65.74% | 359: 0%, 557: 100% | 67: 0%, 1: 100% | 388: 0%, 666: 100%
MSTCBFSP  | Cluster+2 | 22.71% | 557: 100%, 359: 0% | 1: 0%, 67: 100% | 388: 100%, 666: 0%
MSTCBFSP  | Cluster+3 | 2.39%  | 557: 0%, 359: 100% | 67: 100%, 1: 0% | 388: 100%, 666: 0%
MSTCBFSP  | Cluster+4 | 0.40%  | 359: 0%, 557: 0%   | 1: 0%, 67: 0%   | 388: 0%, 666: 100%
MSTCBFSP  | Cluster+5 | 8.76%  | 359: 100%, 557: 0% | 1: 0%, 67: 0%   | 388: 0%, 666: 0%

6 Comparison of the Results and Conclusions

The MSTCBFSP can promptly find a global optimum solution and arbitrarily shaped clusters. In contrast, k-means is apt to find a local optimum solution, does not work on categorical data directly, and can only find convex-shaped clusters. Besides, for the Kohonen map, the explanation of the clustering results is very difficult. In conclusion, MSTCBFSP is a better method. Consequently, we may try to use a fuzzy MSTCBFSP in the future.

Acknowledgements. This research has been partially supported by a grant from National Natural Science Foundation of China (#70621001, #70531040, #70501030, #70472074, #9073020), 973 Project #2004CB720103, Ministry of Science and Technology, China, and BHP Billiton Co., Australia.

References 1. Margaret H. Dunham, Data Mining Introductory and Advanced topics, Prentice Hall (2003) 206-218. 2. Ajith Abraham, Business Intelligence from Web Usage Mining, Journal of Information & Knowledge Management, Vol. 2, No. 4 (2003) 375-390.

A Two-Phase Model Based on SVM and Conjoint Analysis for Credit Scoring

Kin Keung Lai, Ligang Zhou, and Lean Yu

Department of Management Sciences, City University of Hong Kong, Hong Kong
{mskklai,mszhoulg,msyulean}@cityu.edu.hk

Abstract. In this study, we use least square support vector machines (LSSVM) to construct a credit scoring model and introduce conjoint analysis technique to analyze the relative importance of each input feature for making the decision in the model. A test based on a real-world credit dataset shows that the proposed model has good classification accuracy and can help explain the decision. Hence, it is an alternative model for credit scoring tasks. Keywords: support vector machines, conjoint analysis, credit scoring.

1 Introduction

For financial institutions, the ability to predict whether a loan applicant or existing customer will default is crucial, and an improvement in prediction accuracy can help reduce losses significantly. Many statistical methods, optimization techniques and some new approaches from artificial intelligence have been used for developing credit scoring models. A comprehensive description of the methods being used in credit scoring can be found in a recent survey [1]. Each method has its advantages and disadvantages, so it is difficult to find one model that performs consistently better than the others in all circumstances. In terms of classification accuracy, AI technologies can perform better than traditional methods; however, their black-box property makes it difficult for decision makers to use them with adequate confidence. In this study, we introduce an LSSVM [2] approach with the radial basis function (RBF) kernel and adopt an approach based on the principle of design of experiments (DOE) to optimize the parameters [3]. In addition, the conjoint analysis method is used to calculate the relative importance of each input feature for credit risk evaluation. The rest of this paper is organized as follows. Section 2 illustrates the basic concepts of LSSVM and the main process of conjoint analysis and describes our method. In Section 3, we use a real-world dataset to test the proposed method and analyze the results with conjoint analysis. Section 4 provides a conclusion to this study.

2 Two-Stage Model Based on SVM and Conjoint Analysis

Given a training dataset {x_k, y_k}_{k=1}^{N}, we can formulate the LSSVM model in feature space as follows [2]:

min_{w,b,ξ} ζ(w, ξ) = (1/2) w^T w + (C/2) Σ_{k=1}^{N} ξ_k²   (1)

subject to: y_k [w^T φ(x_k) + b] = 1 − ξ_k,  k = 1, ..., N.

The classifier in the dual space takes the form

y(x) = sign( Σ_{k=1}^{N} α_k y_k K(x, x_k) + b ),   (2)

where the α_k are the Lagrange multipliers. In this study, the kernel function is chosen to be the radial basis function (RBF): K(x, x_k) = exp(−||x − x_k||²/σ²). In the above LSSVM model there are two parameters to be determined, C and σ. A method inspired by DOE, proposed by Staelin [3], can reduce the complexity sharply relative to a grid search. The main steps of this approach are as follows:
1. Set the initial range for C and σ to [C_min, C_max] and [D_min, D_max]; iter = 1.
2. While iter ≤ MAXITER do
2.1. According to the pattern shown in Figure 1, find the 13 points in the space [(C_min, D_min), (C_max, D_max)]; set C_Space = C_max − C_min, D_Space = D_max − D_min.
2.2. For each of the 13 points that has never been evaluated, carry out the evaluation: set the parameters according to the coordinates of the point and run LSSVM via k-fold cross validation; the average classification accuracy is the performance index of this point.
2.3. Choose the point p0(C0, D0) with the best performance to be the center and set the new range of the search space: C_Space = C_Space/2, D_Space = D_Space/2. If the rectangle with (C0, D0) as the center and C_Space, D_Space as width and height exceeds the initial range, adjust the center point until the new search space is contained within [(C_min, D_min), (C_max, D_max)].
2.4. Set the new search space: C_min = C0 − C_Space/2, C_max = C0 + C_Space/2, D_min = D0 − D_Space/2, D_max = D0 + D_Space/2; iter = iter + 1.
3. Return the point with the best performance and use its parameters to create a model for classification.
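The coarse-to-fine search itself is easy to prototype. The sketch below follows the steps above on the log2 scale, with a hypothetical evaluate(log2C, log2sigma) callback standing in for the k-fold cross-validated LSSVM accuracy; the callback and the exact layout of the 13 points (here a 3×3 grid plus four half-spaced points around the centre) are our own assumptions.

def doe_search(evaluate, c_range=(-5, 15), d_range=(-15, 5), max_iter=5):
    # evaluate(log2_C, log2_sigma) -> cross-validated accuracy (assumed user callback)
    (c_min, c_max), (d_min, d_max) = c_range, d_range
    cache, best = {}, (float('-inf'), None)
    for _ in range(max_iter):
        c_space, d_space = c_max - c_min, d_max - d_min
        c_mid, d_mid = (c_min + c_max) / 2.0, (d_min + d_max) / 2.0
        # 13-point pattern: a 3x3 grid plus four half-spaced points around the centre
        pts = [(c, d) for c in (c_min, c_mid, c_max) for d in (d_min, d_mid, d_max)]
        pts += [(c_mid + sc * c_space / 4.0, d_mid + sd * d_space / 4.0)
                for sc in (-1, 1) for sd in (-1, 1)]
        for p in pts:
            if p not in cache:                 # evaluate each point only once
                cache[p] = evaluate(*p)
            if cache[p] > best[0]:
                best = (cache[p], p)
        # halve the search space and re-centre it on the best point so far,
        # clipping so that it stays inside the initial range
        c_space, d_space = c_space / 2.0, d_space / 2.0
        c0 = min(max(best[1][0], c_range[0] + c_space / 2.0), c_range[1] - c_space / 2.0)
        d0 = min(max(best[1][1], d_range[0] + d_space / 2.0), d_range[1] - d_space / 2.0)
        c_min, c_max = c0 - c_space / 2.0, c0 + c_space / 2.0
        d_min, d_max = d0 - d_space / 2.0, d0 + d_space / 2.0
    return best                                # (best accuracy, (log2 C, log2 sigma))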

Fig. 1. Sketch of two iterations of the search with the method based on DOE

Conjoint analysis is a technique with wide commercial uses, such as predicting the profitability or market share of a proposed new product and providing insights about how customers make trade-offs between various service features. For an evaluated applicant with features x, its utility can be defined by the value Σ_{k=1}^{N} α_k y_k K(x, x_k) + b from the LSSVM model; the larger this value, the smaller the possibility of default. All the tested applicants can be ranked by their utility. The part-worth model is then selected as the utility estimation model because of its simplicity and popularity. We choose the multiple regression method to estimate the part-worths, with the ranking order of the applicants as the measurement scale for the dependent variable. Decision makers are concerned not only with the classification accuracy of the model but also with the average importance of each attribute, which can be measured by the relative importance of an attribute in conjoint analysis. For each model, we calculate the relative importance of attribute i by the formula

I_i = (max_j P_ij − min_j P_ij) / Σ_{i=1}^{n} (max_j P_ij − min_j P_ij) × 100%,  j = 1, ..., L_i,   (3)

where P_ij is the part-worth of level j of attribute i, L_i is the number of levels of attribute i, and n is the number of attributes.
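Once the part-worths have been estimated by the regression, equation (3) reduces to a few lines; the sketch below assumes the part-worths are supplied as one NumPy array per attribute and is purely illustrative.

import numpy as np

def relative_importance(part_worths):
    # part_worths[i]: estimated part-worths P_ij of the L_i levels of attribute i
    ranges = np.array([pw.max() - pw.min() for pw in part_worths])
    return 100.0 * ranges / ranges.sum()   # equation (3), one percentage per attribute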

3 Empirical Study

A real-world data set, the German credit dataset from UCI, is used. It consists of 700 good instances and 300 bad ones. For each instance, 24 input variables describe 19 attributes. 5-fold cross validation is used to evaluate the performance of each group of parameter settings for the LSSVM model. We set MAXITER = 5 and the search space of log2 C to [−5, 15] and that of log2 σ to [−15, 5], and finally obtain the optimal parameters C = 2^7.5, σ = 2^5.0. We then use 10-fold cross validation to test the efficiency of this model, and five repetitions are conducted to reduce the stochastic variability of the model training and testing process. The results are also compared with those of other methods on the same dataset, as shown in Table 1. The proposed method provides high classification accuracy, but a major deficiency of the model is the difficulty of explaining the role of the input features in making the decision. We use conjoint analysis to calculate the relative importance of each feature after ranking the utilities of the testing samples. For the 10-fold cross validation run there are 10 groups of testing samples; Figure 2 illustrates the average relative importance of each feature over the 10 groups of testing samples for the LSSVM+DOE model. From this figure, we can see that only three of the applicants' features exceed 8%. Although some of the features have less importance, they all contribute to the decision in the LSSVM+DOE model.


Table 1. Results from different methods on the German credit data set

Methods                    | Classification accuracy (%) | Std. (%)
SVM+Grid Search^a          | 76.00 | 3.86
SVM+Grid Search+F-score^a  | 77.50 | 4.03
SVM+GA^a                   | 77.92 | 3.97
MOE^b                      | 75.64 | –
RBF^b                      | 74.60 | –
MLP^b                      | 73.28 | –
LVQ^b                      | 68.37 | –
FAR^b                      | 57.23 | –
LS-SVM+DOE                 | 77.96 | 5.83

^a Results from [4], ^b results from [5].

Fig. 2. Relative importance of features for German dataset

4 Conclusion

This study proposes a two-phase model for credit scoring. The parameters of the LSSVM model are optimized by a search procedure inspired by design of experiments. The decisions of the LSSVM model are then interpreted with the conjoint analysis method. The relative importance of the attributes derived from conjoint analysis provides the decision makers with some idea of which features of the applicant are important for the model and whether the decision is consistent with their past experience. The results show that the proposed model has good classification accuracy and, to some degree, can help the decision makers to explain their decisions.

References 1. Thomas, L.C.: A Survey of Credit and Behavioural Scoring: Forecasting Financial Risk of Lending to Consumers. International Journal of Forecasting 16 (2000) 149-172 2. Suykens, J.A.K., Gestel, T.V., Brabanter, J.D., Moor, B.D., Vandewalle, J.: Least Squares Support Vector Machines. World Scientific, Siningapore (2002) 3. STAELIN, C.: Parameter Selection for Support Vector Machines. Tech. Rep., HPL-2002-354 (R. 1), HP Laboratories Israel, (2003)


4. Huang, C.L., Chen, M.C., Wang, C.J.: Credit Scoring with a Data Mining Approach Based on Support Vector Machines. Expert Systems with Applications (2006) doi:10.1016/j.eswa.2006.1007.1007 5. West, D.: Neural Network Credit Scoring Models. Computers & Operations Research 27 (2000) 1131-1152

A New Multi-Criteria Quadratic-Programming Linear Classification Model for VIP E-Mail Analysis

Peng Zhang1,2, Juliang Zhang1, and Yong Shi1

1 CAS Research Center on Data Technology and Knowledge Economy, Beijing 100080, China, [email protected], [email protected]
2 School of Information Science and Engineering, Graduate University of Chinese Academy of Sciences, Beijing 100080, China

Abstract. In recent years, classification models based on mathematical programming have been widely used in business intelligence. In this paper, a new Multi-Criteria Quadratic-Programming Linear Classification (MQLC) model is proposed and tested on a VIP E-Mail dataset. The experimental study uses a variant of k-fold cross-validation to demonstrate the accuracy and stability of the model. Finally, we compare our model with the decision tree implemented in the commercial software C5.0. The results indicate that the proposed MQLC model performs better than the decision tree on small samples. Keywords: VIP E-Mail Analysis, Data Mining, Multi-criteria Quadratic-programming Linear Classification, Cross-Validation, Decision Tree.

1 Introduction

Data mining is a discipline combining a wide range of subjects such as statistics, machine learning, artificial intelligence, neural networks, database technology and pattern recognition [1]. Recently, classification models based on mathematical programming approaches have been introduced in data mining. In 2001, Shi et al. [2, 3] proposed a Multiple Criteria Linear Programming (MCLP) model which has been successfully applied to the credit card portfolio management of a major US bank. In this paper, a new Multi-criteria Quadratic-programming Linear Classification (MQLC) model is proposed and applied to the VIP E-Mail dataset provided by a major web hosting company in China. As a new model, the performance and stability of MQLC are focal issues. In order to respond to these challenges, this paper conducts a variant of k-fold cross-validation and compares the prediction accuracy of MQLC with that of the decision tree in the software C5.0. Our findings indicate that MQLC is highly stable and performs well on small samples. This paper is organized as follows. The next section gives an overview of the MQLC model formulation; the third section describes the characteristics of the VIP E-Mail dataset; the fourth section discusses the process and results of cross-validation; the fifth section presents the comparison study with the decision tree in the commercial software C5.0; and the last section concludes the paper with a summary of the findings.


2 Formulation of the Multi-Criteria Quadratic-Programming Linear Classification Model

Given a set of r attributes a = (a_1, ..., a_r), let A_i = (A_{i1}, ..., A_{ir}) ∈ R^r be one of the sample observations of these attributes. Suppose we predefine two groups G_1 and G_2; we can select a boundary b to separate these two groups. A vector X = (x_1, x_2, ..., x_r) ∈ R^r can be identified to establish the following linear inequalities [2, 3, 4, 5]:

A_i X < b, ∀ A_i ∈ G_1;   A_i X ≥ b, ∀ A_i ∈ G_2.

We define the external measurement α_i to be the overlapping distance from the two-group boundary for record A_i: if A_i ∈ G_1 but is misclassified into G_2, or vice versa, then α_i equals |A_i X − b|. We then define the internal measurement β_i to be the distance of record A_i from its adjusted boundary b*: if A_i is correctly classified, β_i equals |A_i X − b*|, where b* = b + α_i or b* = b − α_i. Let f(α) denote the aggregation of all overlaps α_i and g(β) the aggregation of all distances β_i. The final absolute catch rates depend on simultaneously minimizing f(α) and maximizing g(β); the most common representation is by norm values. Given weights w_α, w_β > 0 and letting f(α) = ||α + c_1||_p^p and g(β) = ||β + c_2||_q^q, the generalized model can be converted into the single-criterion mathematical programming model

(Normal Model)  Minimize  w_α ||α + c_1||_p^p − w_β ||β + c_2||_q^q   (1)
Subject to:  A_i X − α_i + β_i − b = 0, ∀ A_i ∈ G_1;  A_i X + α_i − β_i − b = 0, ∀ A_i ∈ G_2;  α_i, β_i ≥ 0, i = 1, ..., n.

Based on the Normal Model, α, β, c_1 and c_2 can be chosen freely. When we set p = 2, q = 1, c_1 = 1 and c_2 = 0, the objective function becomes w_α Σ_{i=1}^{n} (α_i + 1)² − w_β Σ_{i=1}^{n} β_i. After expanding it, we can write the model as

(MQLC Model)  Minimize  w_α Σ_{i=1}^{n} α_i² + 2 w_α Σ_{i=1}^{n} α_i − w_β Σ_{i=1}^{n} β_i   (2)
Subject to:  A_i X − α_i + β_i − b = 0, ∀ A_i ∈ G_1;  A_i X + α_i − β_i − b = 0, ∀ A_i ∈ G_2;  α_i, β_i ≥ 0, i = 1, ..., n.
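Model (2) is a convex quadratic program (quadratic and linear in α, linear in β, with linear constraints), so it can be handed to any off-the-shelf QP solver. The sketch below uses CVXPY purely for illustration; the boundary b is treated as a fixed, user-supplied constant, and the solver choice and function names are not from the original paper.

import numpy as np
import cvxpy as cp

def mqlc_train(A1, A2, b=1.0, w_alpha=1.0, w_beta=1.0):
    # A1, A2: observation matrices of groups G1 and G2 (one record A_i per row)
    r, n1, n2 = A1.shape[1], A1.shape[0], A2.shape[0]
    x = cp.Variable(r)
    a1, beta1 = cp.Variable(n1, nonneg=True), cp.Variable(n1, nonneg=True)
    a2, beta2 = cp.Variable(n2, nonneg=True), cp.Variable(n2, nonneg=True)
    alpha, beta = cp.hstack([a1, a2]), cp.hstack([beta1, beta2])
    objective = cp.Minimize(w_alpha * cp.sum_squares(alpha)
                            + 2 * w_alpha * cp.sum(alpha)
                            - w_beta * cp.sum(beta))          # model (2)
    constraints = [A1 @ x - a1 + beta1 - b == 0,               # records of G1
                   A2 @ x + a2 - beta2 - b == 0]               # records of G2
    cp.Problem(objective, constraints).solve()
    return x.value
# a new record A_new is assigned to G1 if A_new @ x < b and to G2 otherwise;
# note: if the two groups happen to be perfectly separable the program can be
# unbounded, and a bound on x (not part of the model above) would then be needed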

In the MQLC model, the fact that the norm value of α is larger than that of β implies that the penalty for misclassifying A_i is more severe than the reward for not enlarging the distance from the adjusted boundary. Since it is possible that α_i is less than 1, in which case the objective value would decrease when α_i is squared, MQLC adds 1 to each α_i before squaring it, to make sure the penalty is aggravated when misclassification occurs.

3 VIP E-Mail Dataset

Our partner company's VIP E-Mail data are mainly stored in two kinds of repository systems: one is databases manually maintained by employees, initially produced to meet the needs of every kind of business service; the other is log files recorded automatically by machines, which contain information about customer logins, E-Mail transactions and so on. After integrating all these data with the keyword SSN, we obtain the Large Table, which has 185 features from the log files and 65 features from the databases. Considering the integrity of the customer records, we eventually extracted two groups of customer records, the current customers and the lost customers, with 4998 records in each group. Combining these 9996 SSNs with the 230 features, we finally acquired the Large Table for data mining.

4 Empirical Study of Cross-Validation

In this paper, a variant of k-fold cross-validation is used on the VIP E-Mail dataset; each calculation consists of training on one of the subsets and testing on the other k−1 subsets. We computed 10 groups of datasets. The accuracy on the 10 training sets is extremely high, with an average accuracy of 88.48% on the lost users and 91.4% on the current users. Concentrating on the 10 testing sets, the worst and best classification catch rates are 78.72% and 82.17% for the lost customers and 84.59% and 89.04% for the current customers. That is to say, the absolute catch rates are above 78% for the lost class and above 84% for the current class. The results indicate that a good separation of the lost class and the current class is observed with this method.
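The procedure itself is a straightforward inversion of standard k-fold cross-validation; the sketch below assumes a generic train_and_score callback and is independent of the MQLC solver.

import numpy as np

def inverted_kfold(X, y, train_and_score, k=10, seed=0):
    # each round trains on ONE fold and tests on the remaining k-1 folds
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        train = folds[i]
        test = np.concatenate(folds[:i] + folds[i + 1:])
        scores.append(train_and_score(X[train], y[train], X[test], y[test]))
    return float(np.mean(scores)), scores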

5 Comparison of MQLC and Decision Tree

The following experiment compares MQLC and the decision tree in the commercial software C5.0. From the results in Table 1, we can see that when the percentage of the training set is increased from 2% to 25%, the accuracy of the decision tree increases from 71.9% to 83.2% while the accuracy of MQLC increases from 78.54% to 83.56%. The accuracy of MQLC is slightly better than that of the decision tree. Moreover, when the training set is 10% of the whole dataset, the accuracy of MQLC peaks at 83.75%. That is to say, MQLC can perform well even on small samples. The reason may be that MQLC solves a convex quadratic problem, for which the global optimal solution can be acquired easily, whereas the decision tree merely selects the better tree from a limited set of built trees, which might not include the best tree. In addition, the pruning procedure of the decision tree may further eliminate some better branches. In conclusion, MQLC performs better than the decision tree on small samples.


Table 1. Comparison of MQLC and Decision Tree

Percentage of training | Decision Tree Training | Decision Tree Testing | MQLC Training | MQLC Testing
2%  | 92.7% | 71.9% | 96.0%  | 78.54%
5%  | 94.9% | 77.3% | 92.8%  | 82.95%
10% | 95.2% | 80.8% | 90.7%  | 83.75%
25% | 95.9% | 83.2% | 88.95% | 83.56%

6 Conclusion

In this paper, a new Multi-criteria Quadratic-programming Linear Classification (MQLC) model has been proposed for classification problems in data mining. 230 features are extracted from the original data sources to describe the VIP E-Mail users, and 9996 records are chosen to test the performance of MQLC. The results of cross-validation show that the model is extremely stable over multiple groups of randomly generated training and testing sets. The comparison of MQLC and the decision tree in C5.0 shows that MQLC performs better than the decision tree on small samples. Experiments have also indicated that the accuracy of MQLC is not affected by the parameters of the objective function; further research will include a mathematical proof of this phenomenon.

Acknowledgments. This research has been partially supported by a grant from National Natural Science Foundation of China (#70621001, #70531040, #70501030, #70472074), 973 Project #2004CB720103, Ministry of Science and Technology, China, NSFB (No. 9073020) and BHP Billiton Co., Australia.


Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

Nargess Memarsadeghi (1,2) and David M. Mount (2)

(1) NASA/GSFC, Code 588, Greenbelt, MD 20771, [email protected]
(2) University of Maryland, College Park, MD 20742, [email protected]

Abstract. Interpolating scattered data points is a problem of wide ranging interest. One of the most popular interpolation methods in geostatistics is ordinary kriging. The price for its statistical optimality is that the estimator is computationally very expensive. We demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods. Keywords: Geostatistics, kriging, tapering, iterative methods.

1 Introduction

Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry [1,2]. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging [3], because it is a best unbiased estimator [4,3,5]. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n × n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that consider only a relatively small number of points lying close to the query point [3,5].

There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution [5]. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders [6]; thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical.


Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering [7]. The problems arise from the fact that the covariance functions used in kriging have global support. In tapering, these functions are approximated by functions that have only local support and that possess certain necessary mathematical properties. This achieves greater efficiency by replacing large dense kriging systems with much sparser linear systems. Covariance tapering has been successfully applied to a restriction of our problem, called simple kriging [7]. Simple kriging, however, is not an unbiased estimator for stationary data whose mean value differs from zero. We generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging.

Our implementations combine, utilize, and enhance a number of approaches that have been introduced in the literature for solving the large linear systems arising in the interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical, since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach [8]. In particular, we use the symmlq method [9] for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite [3,10]. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix [7]. Thus, Cholesky factorization [11] could be used to solve their linear systems, and they implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the tapered system converges to the solution of the original system. The matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system [10]. After obtaining a sparse ordinary kriging linear system through tapering, we use symmlq to solve it [9].

We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse symmlq method for large ordinary kriging systems; this approach adaptively finds the correct local neighborhood for each query point in the interpolation process.

We start with a brief review of ordinary kriging in Section 2. In Section 3 the tapering properties are mentioned. We introduce our approaches for solving the ordinary kriging problem in Section 4. Section 5 describes the data sets we used. We then describe our experiments and results in Section 6. Section 7 concludes the paper. The full version of our paper contains details omitted here [10].

2 Ordinary Kriging

Kriging is an interpolation method named after Danie Krige, a South African mining engineer who pioneered the field of geostatistics [5]. Kriging is also referred to as the Gaussian process predictor in the machine learning domain [12]. Kriging and its variants have traditionally been used in mining and geostatistics applications [4,5,3]. The most commonly used variant is called ordinary kriging, which is often referred to as a BLUE method, that is, a Best Linear Unbiased Estimator [3,7]. Ordinary kriging is considered best because it minimizes the variance of the estimation error; it is linear because estimates are weighted linear combinations of the available data; and it is unbiased since it aims to have a mean error equal to zero [3]. Minimizing the variance of the estimation error forms the objective function of an optimization problem, and ensuring unbiasedness of the error imposes a constraint on this objective function. Formalizing this objective function with its constraint results in the following system [10,3,5]:

\begin{pmatrix} C & L \\ L^t & 0 \end{pmatrix} \begin{pmatrix} w \\ \mu \end{pmatrix} = \begin{pmatrix} C_0 \\ 1 \end{pmatrix} ,   (1)

where C is the matrix of the points' pairwise covariances, L is a column vector of all ones of size n, C_0 is the vector of covariances between the data points and the query point, and w is the vector of weights w_1, ..., w_n. Therefore, the minimization problem for n points reduces to solving a linear system of size (n + 1) × (n + 1), which is impractical for very large data sets via direct approaches. It is also important that the matrix C be positive definite [10,3]. Note that the coefficient matrix in the above linear system is a symmetric matrix that is not positive definite, since it has a zero entry on its diagonal.

Pairwise covariances are modeled as a function of the points' separation. These functions should result in a positive definite covariance matrix. Christakos [13] showed necessary and sufficient conditions for such permissible covariance functions. Two of these valid covariance functions are the Gaussian and Spherical covariance functions (C_g and C_s, respectively); see [13,3,5] for details of these and other permissible covariance functions.
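
As a concrete illustration of Eq. (1), the sketch below builds and solves the dense (n + 1) × (n + 1) ordinary kriging system directly. This is not the authors' C++/GsTL implementation; the Gaussian covariance parameters (sill, range) are placeholders, and the direct dense solve is exactly what becomes infeasible for large n.

```python
import numpy as np

def gaussian_cov(h, sill=1.0, a=12.0):
    """Gaussian covariance model C_g(h); 'sill' and range 'a' are illustrative parameters."""
    return sill * np.exp(-3.0 * (h / a) ** 2)

def ordinary_kriging(points, values, query, cov=gaussian_cov):
    """Solve the dense (n+1)x(n+1) ordinary kriging system of Eq. (1) directly."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = cov(d)               # C : pairwise covariances
    A[:n, n] = A[n, :n] = 1.0        # L and L^t : unbiasedness constraint
    A[n, n] = 0.0                    # zero diagonal entry -> A is symmetric but indefinite
    b = np.append(cov(np.linalg.norm(points - query, axis=1)), 1.0)   # [C_0 ; 1]
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ values, w, mu
```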

3 Tapering Covariances

Tapering covariances for the kriging interpolation problem, as described in [7], is the process of obtaining a sparse representation of the points' pairwise covariances such that both the positive definiteness of the covariance matrix and the smoothness property of the covariance function are preserved. The sparse representation is obtained through the Schur product of the original positive definite covariance matrix with another such matrix:

C_{tap}(h) = C(h) \times C_\theta(h).   (2)

The tapered covariance matrix, C_{tap}, is zero for points that are more than a certain distance apart from each other. It is also positive definite, since it is the Schur product of two positive definite matrices. A taper is considered valid for a covariance model if it preserves the positive-definiteness property and makes the approximate system's solution converge to the original system's solution. The authors of [7] mention a few valid tapering functions; we used the Spherical, Wendland_1, Wendland_2, and Top Hat tapers [7], which are valid for R^3 and lower dimensions [7]. Tapers need to be as smooth at the origin as the original covariance function in order to guarantee convergence to the optimal estimator [7]. Thus, for a Gaussian covariance function, which is infinitely differentiable, no taper satisfies this smoothness requirement. However, since the tapers proposed in [7] still maintain the positive definiteness of the covariance matrices, we examined them for Gaussian covariance functions as well. We use these tapers mainly to build a sparse approximation to our original global system, even though they do not theoretically guarantee convergence to the optimal solution of the original global dense system.
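
The sketch below shows how a compactly supported taper yields a sparse covariance matrix as in Eq. (2). The Wendland_1 form used here is the standard one from the tapering literature, but treat the exact expression and the kd-tree-based assembly as illustrative assumptions rather than the authors' implementation; cov is a vectorized covariance model such as the gaussian_cov of the previous sketch.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix

def wendland1(h, theta):
    """Wendland_1 taper: compactly supported, zero for h >= theta."""
    x = np.clip(h / theta, 0.0, 1.0)
    return (1.0 - x) ** 4 * (4.0 * x + 1.0)

def tapered_covariance(points, cov, theta):
    """Sparse tapered covariance C_tap = C o C_theta, stored in CSR format.
    Only pairs closer than the taper range theta produce nonzero entries."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(theta, output_type="ndarray")     # i < j with distance < theta
    i, j = pairs[:, 0], pairs[:, 1]
    h = np.linalg.norm(points[i] - points[j], axis=1)
    vals = cov(h) * wendland1(h, theta)
    n = len(points)
    rows = np.concatenate([i, j, np.arange(n)])                 # symmetrize and add the diagonal
    cols = np.concatenate([j, i, np.arange(n)])
    data = np.concatenate([vals, vals, np.full(n, cov(np.zeros(1))[0])])
    return csr_matrix((data, (rows, cols)), shape=(n, n))
```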

4 Our Approaches

We implemented both local and global methods for the ordinary kriging problem.

Local Methods: This is the traditional and most common way of solving kriging systems. Instead of considering all known values in the interpolation process, only points within a neighborhood of the query point are considered, and the problem is solved locally. Neighborhood sizes are defined either by a fixed number of points closest to the query point or by the points within a fixed radius of the query point. We experimented with both of these local approaches. We defined the fixed radius to be the distance beyond which correlation values are less than 10^-6 of the maximum correlation. Similarly, for the fixed-number approach, we used the maximum connectivity degree of the points' pairwise covariances, counting covariance values larger than 10^-6 of the maximum covariance value. Gaussian elimination [14] was used for solving the local linear systems in both cases.

Global Tapered Methods: In the global tapered methods we first redefine the points' covariance function to be the tapered covariance function obtained through Eq. (2), where C(h) is the points' pairwise covariance function and C_θ(h) is a tapering function. We then solve the linear system using the symmlq approach of [9]. Note that, while one can use the conjugate gradient method for solving symmetric systems, that method is guaranteed to converge only when the coefficient matrix is both symmetric and positive definite [15]. Since ordinary kriging systems are symmetric but not positive definite, we used symmlq. We implemented a sparse symmlq method, similar to the sparse conjugate gradient method in [16]. In the implementation of [16], matrix elements that are less than or equal to a threshold value are ignored; since we obtain sparseness through tapering, this threshold value is zero in our application.

Global Tapered and Projected Methods: This implementation is motivated by numerous empirical results in geostatistics indicating that interpolation weights associated with points that are very far from the query point tend to be close to zero. This phenomenon is called the screening effect in the geostatistical literature [17]. Stein showed conditions under which the screening effect occurs for gridded data [17]. While the screening effect has been the basis for using local methods, there is no proof of this empirically supported idea for scattered data points [7]. We use this conjecture when solving the global ordinary kriging system Ax = b by observing that many elements of b are zero after tapering. A zero element b_i, representing the covariance between the query point and the ith data point, means C_{i0} = 0; thus we expect the associated interpolation weight w_i to be very close to zero. We assign zero to such w_i's and consider solving a smaller system A'x' = b', where b' consists of the nonzero entries of b. We store the indices of the nonzero rows of b in a vector called indices; A' contains only those elements of A whose row and column indices both appear in indices. This method is effectively the same as the fixed-radius neighborhood approach, except that the local neighborhood is found adaptively. There are several differences between this approach and the local methods. First, we build the global matrix A once and, for each query point, use only the relevant parts of it, namely those contributing to nonzero weights. Second, for each query, the local neighborhood is found adaptively by looking at covariance values in the global system. Third, the covariance values are modified.
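
A sketch of the global tapered solve and its projected variant follows. SciPy does not provide SYMMLQ, so MINRES (also due to Paige and Saunders, and likewise intended for symmetric indefinite systems) is used here as a stand-in; gaussian_cov, wendland1, and tapered_covariance are the assumed helpers from the earlier sketches, not the authors' code.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix
from scipy.sparse.linalg import minres

def tapered_ok_solve(points, values, query, cov, theta, project=True):
    """Global tapered ordinary kriging solve; optionally project onto the rows
    where the tapered right-hand side is nonzero (the 'tapered and projected' variant)."""
    n = len(points)
    C = tapered_covariance(points, cov, theta)                  # sparse C_tap
    ones = csr_matrix(np.ones((n, 1)))
    A = bmat([[C, ones], [ones.T, None]], format="csr")         # [[C, L], [L^t, 0]]
    h0 = np.linalg.norm(points - query, axis=1)
    c0 = np.where(h0 < theta, cov(h0) * wendland1(h0, theta), 0.0)
    b = np.append(c0, 1.0)
    if project:
        idx = np.flatnonzero(b)                                  # keep rows/columns with b_i != 0
        x_sub, _ = minres(A[idx][:, idx], b[idx])
        x = np.zeros(n + 1)
        x[idx] = x_sub
    else:
        x, _ = minres(A, b)
    w = x[:n]
    return w @ values
```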

5 Data Sets

As mentioned before, we cannot solve the original global systems exactly for very large data sets, and thus cannot compare our solutions against the original global systems; we therefore need ground-truth values for our data sets. Also, since the performance of local approaches can depend on the data density around the query point, we would like our data sets to be scattered nonuniformly. We therefore create our scattered data sets by sampling points of a large dense grid from both uniform and Gaussian distributions. We generated our synthetic data sets using the Sgems software [18]. We generated values on a 1000 × 1000 grid using the Sequential Gaussian Simulation (sgsim) algorithm of the Sgems software [19,18]. Points were simulated through ordinary kriging with a Gaussian covariance function of range equal to 12, using a maximum of 400 neighboring points within a 24-unit-radius area. We then created 5 sparse data sets by sampling 0.01% to 5% of the original simulated grid's points. This procedure resulted in sparse data sets of sizes ranging from over 9K to over 48K. The sampling was done so that the concentration of points varies across locations: for each data set, 5% of the sampled points were drawn from 10 randomly selected Gaussian distributions, and the rest of the points were drawn from the uniform distribution. Details of the real data tests and results are in our full paper [10].

6 Experiments

All experiments were run on a Sun Fire V20z running Red Hat Enterprise release 3, using the g++ compiler version 3.2.3. Our software is implemented in C++, using the GsTL and ANN libraries [19,20]. GsTL is used to build and solve the linear systems; ANN is used for finding nearest neighbors in the local approaches. For each input data set we examined various ordinary kriging methods on 200 query points. Half of these query points were sampled uniformly from the original grids; the other 100 query points were sampled from the Gaussian distributions. We tested both local and global methods. Local methods used Gaussian elimination for solving the linear systems, while global tapered methods used sparse symmlq. Running times are averaged over 5 runs.

We examined the methods mentioned in Section 4. Global approaches require the selection of a tapering function; for the synthetic data, we examined all tapers mentioned in Section 3. Even though no taper is as smooth as the Gaussian model, as required to guarantee convergence to the optimal estimates, in almost all cases we obtained lower estimation errors when using the global tapered approaches. As expected, smoother taper functions result in lower estimation errors. Also, the results of the tapered and projected cases are comparable to their corresponding tapered global approaches; in other words, projecting the global tapered system did not significantly affect the quality of the results in our experiments. In most cases, the Top Hat and Spherical tapers performed similarly to each other with respect to the estimation error, and so did the Wendland tapers. The Wendland tapers give the lowest overall estimation errors; among them, Wendland_1 has lower CPU running times for solving the systems. Figure 1 shows the results when the Wendland_1 taper was used.

For the local approaches, using fixed-radius neighborhoods resulted in lower errors for query points from the Gaussian distributions, while using a fixed number of neighbors seems more appropriate for uniformly sampled query points. Using the maximum degree of the points' covariance connectivity as the number of neighbors in the local approach requires extra work and longer running times compared to the fixed-radius approach. The fixed-radius local approach is faster than the fixed-neighborhood approach by 1-2 orders of magnitude for the uniform query points, and faster by a constant factor up to an order of magnitude for query points from the clusters, while giving better or very close estimates compared to the fixed-number-of-neighbors approach (Tables 1 and 2). Tapering, used with sparse implementations for solving the linear systems, results in significant memory savings; Table 3 reports these memory savings for the synthetic data to be a factor of 392 to 437.

Table 1. Average CPU Times for Solving the System over 200 Random Query Points

n     | Fixed Num | Fixed Radius | Top Hat | Top Hat Proj. | Spherical | Spherical Proj. | W1     | W1 Proj. | W2     | W2 Proj.
48513 | 0.03278   | 0.00862      | 8.456   | 0.01519       | 7.006     | 0.01393         | 31.757 | 0.0444   | 57.199 | 0.04515
39109 | 0.01473   | 0.00414      | 4.991   | 0.00936       | 4.150     | 0.00827         | 17.859 | 0.0235   | 31.558 | 0.02370
29487 | 0.01527   | 0.00224      | 2.563   | 0.00604       | 2.103     | 0.00528         | 8.732  | 0.0139   | 15.171 | 0.01391
19757 | 0.00185   | 0.00046      | 0.954   | 0.00226       | 0.798     | 0.00193         | 2.851  | 0.0036   | 5.158  | 0.00396
9951  | 0.00034   | 0.00010      | 0.206   | 0.00045       | 0.169     | 0.00037         | 0.509  | 0.0005   | 0.726  | 0.00064
(Fixed Num and Fixed Radius are the local methods; the remaining columns are the tapered global methods, with "Proj." denoting the projected variants.)


Table 2. Average Absolute Errors over 200 Randomly Selected Query Points

n     | Fixed Num | Fixed Radius | Top Hat | Top Hat Proj. | Spherical | Spherical Proj. | W1    | W1 Proj. | W2    | W2 Proj.
48513 | 0.416     | 0.414        | 0.333   | 0.334         | 0.336     | 0.337           | 0.278 | 0.279    | 0.276 | 0.284
39109 | 0.461     | 0.462        | 0.346   | 0.345         | 0.343     | 0.342           | 0.314 | 0.316    | 0.313 | 0.322
29487 | 0.504     | 0.498        | 0.429   | 0.430         | 0.430     | 0.430           | 0.384 | 0.384    | 0.372 | 0.382
19757 | 0.569     | 0.562        | 0.473   | 0.474         | 0.471     | 0.471           | 0.460 | 0.463    | 0.459 | 0.470
9951  | 0.749     | 0.756        | 0.604   | 0.605         | 0.602     | 0.603           | 0.608 | 0.610    | 0.619 | 0.637

Table 3. Memory Savings in the Global Tapered Coefficient Matrix

n     | (n + 1)^2 (Total Elements) | Stored Elements | % Stored | Savings Factor
48513 | 2,353,608,196              | 5,382,536       | 0.229    | 437.267
39109 | 1,529,592,100              | 3,516,756       | 0.230    | 434.944
29487 | 869,542,144                | 2,040,072       | 0.235    | 426.231
19757 | 390,378,564                | 934,468         | 0.239    | 417.755
9951  | 99,042,304                 | 252,526         | 0.255    | 392.206

Fig. 1. Left: Average Absolute Errors over 200 query points. Right: Average CPU Running Times for solving the system over 200 query points. Both panels plot the Fixed Num, Fixed Radius, Wendland_1 Tapered, and Wendland_1 Tapered & Projected methods against the number of scattered data points (n).

7 Conclusion

Solving very large ordinary kriging systems via direct approaches is infeasible. We implemented efficient ordinary kriging algorithms by utilizing covariance tapering [7] and iterative methods [14,16]. Furrer et al. [7] utilized covariance tapering together with a sparse Cholesky decomposition to solve simple kriging systems; their approach is not applicable to the general ordinary kriging problem. We used tapering with a sparse symmlq method to solve large ordinary kriging systems. We also implemented a variant of the global tapered method that projects the global system onto an appropriate smaller system. The global tapered methods resulted in memory savings ranging from a factor of 4.54 to 437.27. The global tapered iterative methods gave better estimation errors than the local approaches, and the estimation results of the global tapered method were very close to those of the global tapered and projected method. The global tapered and projected method solves the linear systems order(s) of magnitude faster than the global tapered method.

Acknowledgements. We would like to thank Galen Balcom for his contributions to the C++ implementation of the symmlq algorithm.

References

1. Amidror, I.: Scattered data interpolation methods for electronic imaging systems: a survey. J. of Electronic Imaging 11 (2002) 157-176
2. Alfeld, P.: Scattered data interpolation in three or more variables. Mathematical methods in computer aided geometric design (1989) 1-33
3. Isaaks, E.H., Srivastava, R.M.: An Introduction to Applied Geostatistics. Oxford University Press (1989)
4. Journel, A., Huijbregts, C.J.: Mining Geostatistics. Academic Press Inc. (1978)
5. Goovaerts, P.: Geostatistics for Natural Resources Evaluation. Oxford University Press, Oxford (1997)
6. Meyer, T.H.: The discontinuous nature of kriging interpolation for digital terrain modeling. Cartography and Geographic Information Science 31 (2004) 209-216
7. Furrer, R., Genton, M.G., Nychka, D.: Covariance tapering for interpolation of large spatial datasets. J. of Computational and Graphical Statistics 15 (2006) 502-523
8. Billings, S.D., Beatson, R.K., Newsam, G.N.: Interpolation of geophysical data using continuous global surfaces. Geophysics 67 (2002) 1810-1822
9. Paige, C.C., Saunders, M.A.: Solution of sparse indefinite systems of linear equations. SIAM J. on Numerical Analysis 12 (1975) 617-629
10. Memarsadeghi, N., Mount, D.M.: Efficient implementation of an optimal interpolator for large spatial data sets. Technical Report CS-TR-4856, Computer Science Department, University of Maryland, College Park, MD 20742 (2007)
11. Loan, C.F.V.: Intro. to Scientific Computing. 2nd edn. Prentice-Hall (2000)
12. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press (2006)
13. Christakos, G.: On the problem of permissible covariance and variogram models. Water Resources Research 20 (1984) 251-265
14. Nash, S.G., Sofer, A.: Linear and Nonlinear Programming. McGraw-Hill Companies (1996)
15. Shewchuk, J.R.: An intro. to the conjugate gradient method without the agonizing pain. CMU-CS-94-125, Carnegie Mellon University (1994)
16. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes in C++, The Art of Scientific Computing. Cambridge University Press (2002)
17. Stein, M.L.: The screening effect in kriging. Annals of Statistics 1 (2002) 298-323
18. Remy, N.: The Stanford Geostatistical Modeling Software (S-GeMS). SCRC Lab, Stanford University (2004)
19. Remy, N.: GsTL: The Geostatistical Template Library in C++. Master's thesis, Department of Petroleum Engineering, Stanford University (2001)
20. Mount, D.M., Arya, S.: ANN: A library for approximate nearest neighbor searching. http://www.cs.umd.edu/~mount/ANN/ (2005)

Development of an Efficient Conversion System for GML Documents

Dong-Suk Hong, Hong-Koo Kang, Dong-Oh Kim, and Ki-Joon Han

School of Computer Science & Engineering, Konkuk University, 1, Hwayang-Dong, Gwangjin-Gu, Seoul 143-701, Korea
{dshong, hkkang, dokim, kjhan}@db.konkuk.ac.kr

Abstract. In this paper, we designed and implemented a Conversion System for GML documents, and evaluated the performance of the system. The conversion system that uses the BXML-based binary encoding specification can directly display geographic information from BXML documents and can convert GML documents to BXML documents without loss of information, and vice versa. BXML is generally more effective in scanning cost and space requirement than GML and coordinate values in the form of BXML can be used directly without conversion. Keywords: Conversion System, GML, BXML, OGC, Geographic Information.

1 Introduction

Recently, OGC (Open GIS Consortium) has presented the GML (Geography Markup Language) specification [3,4] for the interoperability of geographic information. However, since GML documents are encoded in text and their tags are very repetitive, they suffer from problems such as large data size, slow transmission time, and enormous document scanning cost [6]. The major approach to reducing the size of the text is compression, such as GZIP [1]. However, since data compressed by GZIP must be decompressed into the original GML document before use, the document scanning cost and the coordinate-value conversion cost increase enormously. OGC has proposed the BXML (Binary-XML) encoding specification [5], which can encode a GML document into a binary XML format and reduce the document size by removing the repetitive tag names and attributes. The BXML encoding specification can also reduce the coordinate-value conversion cost of displaying geographic information by encoding coordinate values as binary values.

In this paper, we designed and implemented a Conversion System for GML documents. The system uses the BXML-based binary encoding specification proposed by OGC. BXML documents are generally more effective in scanning cost and space requirement than GML documents; in particular, coordinate values in BXML documents can be used directly without conversion. The system can directly display geographic information from BXML documents and can convert GML documents to BXML documents without loss of information, and vice versa. In addition, this paper analyzes the performance results of the system.


2 Related Works

OGC proposed the GML specification for interoperability [3,4]. GML is an XML encoding in compliance with ISO 19118 for the transport and storage of geographic information modeled according to the conceptual modeling framework used in the ISO 19100 series, including both the spatial and non-spatial properties of geographic features [2,3]. GML as presently encoded using plain-text XML [7] has three major performance problems [5]: the text in the XML structure is highly redundant and bulky, making it slow to transfer over the Internet; the lexical scanning of XML is unexpectedly costly; and the conversion of text-encoded numerical coordinate and observation values is also very costly. BXML was proposed to enable more effective transmission in limited environments by reducing the size of an XML document without changing its contents [5]. If an XML file is used as a database of objects with a specific attribute used as a primary key, then it is greatly more efficient to seek randomly and directly to the data tokens that encode those objects. BXML can directly represent raw binary data without any indirect textual encoding, and a backward-compatibility mechanism is provided to enable the translation of raw-binary blocks into an equivalent XML representation.

3 System Development

Fig. 1 shows the architecture of the conversion system for GML documents presented in this paper.

Fig. 1. System Architecture

This system is divided into a server and a client. The server system is composed of a GML document analysis module to analyze GML documents, an encoding module to encode GML documents into BXML documents, a BXML document analysis module to analyze BXML documents, a decoding module to decode BXML documents into GML documents, a display module to read the spatial data from BXML documents for display, and a network transmitting module to transmit the data encoded as BXML documents over the network. The client system is composed of a network transmitting module, a BXML document analysis module, and a display module.


In order to convert a GML document into a BXML document, the GML document is analyzed, split into tokens, and encoded into the corresponding binary values based on the code values. Fig. 2 and Fig. 3 show an example of a GML document and the BXML document converted from the GML document of Fig. 2, respectively. Conversely, in order to convert a BXML document back into the original GML document, the BXML document is analyzed and the token types are classified. The display module extracts the spatial data from a BXML document and displays it on the screen as a map. Fig. 4 shows the screen display for the BXML document of Fig. 3.
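
The toy sketch below is not the OGC BXML token format; it only illustrates why binary-encoded coordinates are cheaper to use than text: the doubles can be read directly, with no string parsing or numeric conversion.

```python
import struct

def encode_coordinates_binary(coords):
    """Illustrative binary encoding of a GML <coordinates> list:
    a count followed by raw IEEE-754 doubles."""
    flat = [v for xy in coords for v in xy]
    return struct.pack("<I%dd" % len(flat), len(coords), *flat)

def decode_coordinates_binary(buf):
    (n,) = struct.unpack_from("<I", buf, 0)
    flat = struct.unpack_from("<%dd" % (2 * n), buf, 4)
    return [(flat[2 * i], flat[2 * i + 1]) for i in range(n)]

coords = [(126.9784, 37.5665), (127.0276, 37.4979)]
text = " ".join("%f,%f" % xy for xy in coords)       # plain-text GML style
binary = encode_coordinates_binary(coords)
print(len(text.encode()), len(binary))                # compare encoded sizes
assert decode_coordinates_binary(binary) == coords    # values round-trip exactly
```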

Fig. 2. GML Document

Fig. 3. BXML Document

Fig. 4. Screen Display

4 Performance Evaluation

This section examines the performance evaluation results of the conversion system for GML documents. The GML documents used are 20 Mbytes, 40 Mbytes, 60 Mbytes, and 80 Mbytes in size and are composed of point data, polyline data, and polygon data. Fig. 5 shows the document sizes when the GML documents are compressed by GZIP and when they are encoded into BXML documents. As shown in Fig. 5, compressing the GML document with GZIP achieves the better compression rate. However, GZIP compression does not consider document scanning; it solely reduces the data size. Therefore, the GZIP-compressed document must be decompressed and restored to its original state before the data contained within can be used. On the other hand, encoding into a BXML document saves more document scanning cost than GZIP compression. Fig. 6 shows the display time of the geographic information of the GML document and the BXML document. As shown in Fig. 6, using the BXML document rather than the GML document achieves faster display times, and the gap between the two document types widens as the file size grows. In the case of the GML document, since all data are encoded as text, the coordinate values must be extracted and converted. In the case of the BXML document, since the geographic data are encoded in their original data types, they can be used without any type conversion, which makes faster display possible.


Fig. 5. Comparison of Compression Size

Fig. 6. Comparison of Display Time

5 Conclusion

This paper designed and implemented a conversion system for GML documents that can convert GML documents into BXML documents without loss of information, and vice versa. The system can also directly display geographic information from BXML documents. By using the conversion system, it is possible to enhance transmission speed by reducing the GML document size and to enhance document scanning speed for faster display of the geographic information in BXML documents.

Acknowledgements This research was supported by the Seoul Metropolitan Government, Korea, under the Seoul R&BD Program supervised by the Seoul Development Institute.

References

1. IETF RFC 1952: GZIP File Format Specification Version 4.3. http://www.ietf.org/rfc/rfc1592.txt (1996)
2. ISO/TC 211: ISO 19136 Geographic Information - Geography Markup Language (GML) (2004)
3. OpenGIS Consortium, Inc.: Geography Markup Language (GML) Implementation Specification 3.1.0 (2004)
4. OpenGIS Consortium, Inc.: Geography Markup Language (GML) Version 3.1.1 Schemas (2005)
5. OpenGIS Consortium, Inc.: Binary-XML Encoding Specification 0.0.8 (2003)
6. Piras, A., Demontis, R., Vita, E.D., Sanna, S.: Compact GML: Merging Mobile Computing and Mobile Cartography. Proceedings of the 3rd GML and Geo-Spatial Web Service Conference (2004)
7. W3C: Extensible Markup Language (XML) 1.0 (Second Edition). http://www.w3.org/TR/REC-xml (2000)

Effective Spatial Characterization System Using Density-Based Clustering

Chan-Min Ahn (1), Jae-Hyun You (2), Ju-Hong Lee (3,*), and Deok-Hwan Kim (4)

(1,2,3) Department of Computer Science and Engineering, Inha University, Korea
(4) Department of Electronic Engineering, Inha University, Korea
(1,2) {ahnch1, you}@datamining.inha.ac.kr, (3,4) {juhong, deokhwan}@inha.ac.kr
* Corresponding author.

Abstract. Previous spatial characterization methods do not analyze spatial regions well for a given query, since they focus only on characterization of the user's pre-selected area without considering spatial density. Consequently, the effectiveness of the characterization knowledge is decreased in these methods. In this paper, we propose a new hybrid spatial characterization system that combines a density-based clustering module with a generalization module consisting of attribute removal generalization and concept hierarchy generalization. The proposed method generates characteristic rules and applies density-based clustering to enhance the effectiveness of the generated rules.

1 Introduction

Recently, the need for spatial data mining has increased in many applications such as geographic information systems, weather forecasting, and market analysis. Studies on spatial data mining techniques include the spatial characteristic rule, which extracts summary information from spatial and non-spatial data; the spatial association rule, which finds spatial relationships between data; and spatial clustering, which partitions spatial data objects into a set of meaningful subclasses [6,7,8,9]. In particular, spatial characterization is a method for discovering knowledge by aggregating non-spatial and spatial data [4]. Many previous data mining systems support spatial characterization methods. Economic Geography and GeoMiner are representative spatial data mining systems that support spatial knowledge discovery [7,9]. Ester et al. suggest the Economic Geography system based on the BAVARIA database [9]. Economic Geography uses a part of the entire database and performs characterization by using the relative frequency of spatial and non-spatial data; that is, it increases the effectiveness of characterization by using only the relative frequencies of the objects nearest to the target object. The GeoMiner system extends the DBMiner system to handle both spatial and non-spatial data [7]. It enables knowledge discovery for association, clustering, characterization, and classification, and presents both NSD (Non-Spatial data Dominant Generalization) and SD (Spatial data Dominant Generalization) algorithms for spatial characterization. GeoMiner has the advantage of presenting the appropriate type of discovered knowledge according to the user's needs.

The previous characterization systems have the following problems: first, their characterization methods do not analyze spatial regions well for a given user query, since they only use regions predefined by domain experts; second, the user has to describe the query range directly to reduce the search scope [6,8]. In this paper, to solve these problems, we propose an effective spatial characterization method that combines a spatial data mining module, which extends the generalization technique, with a density-based clustering module. The density-based clustering module enables the system to analyze the spatial region and to reduce the search scope for a given user query. Our characterization method can generate useful characteristic rules and applies density-based clustering to enhance the effectiveness of the generated rules. Its effectiveness is measured by information gain, which uses the entropy of the selected data set [1].

The rest of the paper is organized as follows: Section 2 describes the proposed spatial characterization method using density-based clustering; Section 3 introduces a new characterization system implementing the suggested method; Section 4 contains the results of our experiments; finally, Section 5 summarizes our work.

2 Spatial Characterization Method Using Density-Based Clustering

In this section, we describe a spatial characterization method using density-based clustering. Our characterization method first performs density-based clustering on the spatial data and then extracts summary information by aggregating non-spatial and spatial data with respect to each cluster.

2.1 Proposed Spatial Characterization Method

Spatial characterization extracts the summary information of a data class, with respect to the search scope, from the given spatial and non-spatial data. The proposed spatial characterization method consists of the five steps shown in Fig. 1. The first step is to retrieve task-relevant data in the search scope from the spatial database; task-relevant data are the spatial and non-spatial data of the database tuples related to the user query. After that, we perform density-based clustering on the spatial data of the retrieved task-relevant data; this step is described separately in Section 2.2. As a result of clustering, the search scope is reduced to the objects within the selected spatial clusters. In the third and fourth steps, generalization is performed on the non-spatial attributes of each object in the spatial clusters until the specified user threshold is satisfied. The proposed generalization module consists of an attribute removal generalization and a concept hierarchy generalization.


Algorithm 1. Spatial characterization algorithm

Given: concept hierarchy, user threshold
Method:
1. Task-relevant data are retrieved by a user query.
2. Perform density-based clustering on the spatial data obtained from step 1.
3. Repeat generalization until the given user threshold is satisfied:
   (1) Remove useless non-spatial attributes from the task-relevant data.
   (2) If a concept hierarchy is available, generalize the task-relevant data to high-level concept data.
4. Apply aggregate operations to the non-spatial data obtained from step 3.
5. Return spatial characteristic rules.

Fig. 1. Spatial characterization algorithm

The attribute removal generalization is used in the process of converting non-spatial task-relevant data into summary information. An attribute is removed in the following cases: (1) the tuple values of the attribute are all distinct, or the attribute has more distinct tuple values than the user threshold; (2) the tuple values of the attribute cannot be replaced with higher-level concept data. After the attribute removal step, the non-spatial task-relevant data are converted to summary information using a concept hierarchy. A concept hierarchy is a sequence of mappings from a set of low-level raw data to more general high-level concepts that represent summary information [5]. At the end, the algorithm returns spatial characteristic rules. This generalization algorithm is the key point of the proposed spatial characterization. Extracting generalized knowledge from a spatial database using spatial characterization requires generalization of both spatial and non-spatial data; thus, the generalization module performs the generalization task for non-spatial task-relevant data by using the user-defined concept hierarchy.

2.2 Density-Based Spatial Clustering

Spatial data mining uses both non-spatial data and spatial data. A tuple in the database includes a spatial object, which holds the coordinate information of the spatial data related to the non-spatial data. Spatial clustering is used to partition the areas the user wants to search [2,3,9,10]. We use DBSCAN as the density-based spatial clustering module to group spatial areas for spatial characterization, since it can be used as a basic tool for spatial analysis [9,10]. DBSCAN merges regions with sufficiently high density into clusters. The proposed method supports not only a point type but also a polygon type; we extend DBSCAN with the minmax distance function to support both types of spatial data. Let an object be a point datum or a polygon datum. In order for objects to be grouped, there must be at least a minimum number of objects, MinObj, in the ε-neighborhood Ne(p) of an object p, given a radius ε, where p and q are objects and D is a data set. If p and q are point-type objects, their distance can be calculated by the Euclidean distance function [9]; otherwise, their distance can be calculated by the minmax distance function [11], which computes the minimum of all the maximum distances between objects.

Ne(p) = { q ∈ D | dist(p, q) ≤ ε }   (1)

Fig. 2 illustrates a clustering example of polygon-type objects on the map of the province of Incheon city. To cluster polygon-type objects, we calculate the MBR (minimum bounding rectangle) of each polygon object. After that, we calculate the minmax distance values between the center of the selected MBR and the other MBRs. The remaining steps are the same as in the clustering process for point-type objects.
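
The following sketch shows a MINMAXDIST computation in the style of Roussopoulos et al. [11] between a point (for example, an MBR center) and another MBR. It is an illustration under that assumption, not the authors' implementation; the dimension count and the (lower, upper) corner layout are choices made here.

```python
import math

def minmaxdist(p, mbr):
    """MINMAXDIST between point p and an MBR given as (lower, upper) corner tuples:
    the smallest distance within which some face of the MBR must contain an object boundary."""
    lo, hi = mbr
    dim = len(p)
    rM = [lo[i] if p[i] >= (lo[i] + hi[i]) / 2.0 else hi[i] for i in range(dim)]  # far face per axis
    rm = [lo[k] if p[k] <= (lo[k] + hi[k]) / 2.0 else hi[k] for k in range(dim)]  # near face per axis
    best = float("inf")
    for k in range(dim):
        s = (p[k] - rm[k]) ** 2 + sum((p[i] - rM[i]) ** 2 for i in range(dim) if i != k)
        best = min(best, s)
    return math.sqrt(best)

# Example: distance from the center of one MBR to another MBR.
center = (2.0, 2.0)
other = ((4.0, 1.0), (6.0, 3.0))
print(minmaxdist(center, other))
```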

Fig. 2. Clustering example of polygon type object

Fig. 3 illustrates the extended DBSCAN algorithm. For this clustering algorithm, we need some parameters: let SetOfObjects be a set of spatial data, NoOfClusters be the given number of clusters, and r be the radius of the ε-neighborhood. The complexity of the extended DBSCAN algorithm can be expressed as O(n^2), where n is the number of objects.

3 Design and Implementation of Spatial Characterization System

In this section, we describe each component of the proposed spatial characterization system and an application example. The proposed spatial characterization system consists of a spatial data mining query processor, a spatial data mining module, a spatial database, and a spatial clustering module. We use the SMQL query language for spatial data mining; SMQL is a spatial data mining language used in the GMS spatial database system [13]. Example 1 shows an SMQL query of an application for the proposed system. The task-relevant data in Table 1 are the intermediate query results of Example 1.


Algorithm 2. Extended density-based clustering algorithm

Given: all data in D are unclassified
Input parameters: SetOfObjects, NoOfClusters, ClassifiedSetOfObjects, radius r, MinObj
Method:
1. Make ClassifiedSetOfObjects empty.
2. Choose any object p from SetOfObjects and select it as a seed object.
3. Search for density-reachable objects from the seed object with respect to the radius r and add them to the neighborhood. To find density-reachable objects, use the Euclidean distance function for point data or the minmax distance function for polygon data.
4. Randomly choose any object within the neighborhood satisfying the core object condition and select it as a new seed object.
5. Repeat steps 3 to 4 until the number of density-reachable objects is less than MinObj.
6. Move the neighborhood objects from SetOfObjects to ClassifiedSetOfObjects.
7. Repeat steps 2 to 6 until no more objects remain in SetOfObjects or the number of generated neighborhoods is greater than or equal to NoOfClusters.

Fig. 3. Extended density-based clustering algorithm
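
The sketch below is a simplified rendering of Algorithm 2, not the authors' code: it grows one neighborhood at a time from a seed through density-reachable objects. The dist argument can be the Euclidean distance for points or, for polygons, the minmaxdist of the previous sketch applied to MBR centers and MBRs.

```python
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def extended_dbscan(objects, r, min_obj, no_of_clusters, dist=euclidean):
    """Simplified sketch of Algorithm 2: expand a cluster from a seed object
    through density-reachable objects until it stops growing."""
    remaining = list(range(len(objects)))
    clusters = []
    while remaining and len(clusters) < no_of_clusters:
        seed = remaining[0]
        cluster, frontier = {seed}, [seed]
        while frontier:
            p = frontier.pop()
            reachable = [i for i in remaining
                         if i not in cluster and dist(objects[p], objects[i]) <= r]
            if len(reachable) >= min_obj:           # p satisfies the core-object condition
                cluster.update(reachable)
                frontier.extend(reachable)
        clusters.append(sorted(cluster))
        remaining = [i for i in remaining if i not in cluster]
    return clusters
```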

Example 1. SMQL query for characterization with respect to annual_income, education, and age of women who reside in Incheon city.

MINE Characterization as woman_pattern
USING HIERARCHY H_income, H_education, H_age
USING Clustering distance 30
FOR annual_income, education, age
FROM census
WHERE province = "InCheon", gender = "F"
SET distinct_value threshold 20

[Step 1] Given the SMQL query statement in Example 1, the system collects the annual_income, education, age, and coordinate information attribute values of the task-relevant data from the census table in the spatial database. Please refer to [13] for the SMQL query processor.

[Step 2] Clustering is performed on the spatial task-relevant data using the coordinate information, which is stored in the object pointer attribute.

[Step 3] In this step, low-level attribute values are replaced with the matching high-level attribute values in the concept hierarchy. The system generalizes the task-relevant data in Table 1 with respect to the annual_income attribute. This generalization increases the opportunity for aggregation.


[Step 4] Characterization that aggregates all tuples is performed. Table 2 shows the aggregated data resulting from the SMQL query of Example 1.

Table 1. Task-relevant data

id | Annual_income | age | education | object
1  | 2580          | 45  | Univ      |
2  | 2400          | 40  | Senior    |
3  | 3850          | 36  | Master    |
4  | 1780          | 27  | Univ      |
…  | …             | …   | …         | …

Table 2. A result of spatial characterization

Cluster | annual_income | Age    | education | count
C1      | Middle        | middle | Higher    | 481
C3      | Middle        | middle | Secondary | 316
C1      | Middle high   | middle | Higher    | 156
…       | …             | …      | …         | …

4 Evaluation and Result

We use a real map of the province of Incheon city. The data consist of 8000 objects comprising points and polygons. We perform an experiment and evaluate the result using entropy. Entropy [1,12] is used in information theory to measure the purity of the target data. When the data set S is classified into c subsets, the entropy of S is defined as

Entropy(S) = -\sum_{i=1}^{c} p_i \log_2 p_i ,   (2)

where S is the selected data set, p_i is the ratio of the set S belonging to class i, and c is the number of distinct classes of S. The average weight of an attribute is defined as

W = w_{n_i} / T ,   (3)

where w_{n_i} is the weight of n_i, n_i is a randomly selected attribute, and T is the total weight. The entropy and the weight of each attribute are used to measure whether the result of spatial characterization reflects the data distribution in the database well. Thus, the information gain of the result of spatial characterization, using the entropy and the weight of each attribute, is defined as

Gain(G) = E - (W_a E_a + W_b E_b) ,   (4)

where E is the total entropy, a is a data set, b is another data set distinct from a, W_a and W_b are the average weights of a and b, and E_a and E_b are the entropies of a and b, respectively. We apply the information gain to the sample data used in the application example of Section 3. Characterization without clustering denotes the previous method [7,9] that uses only data generalization, while characterization with clustering denotes the proposed method. Fig. 4 shows the experimental result comparing characterization with clustering and characterization without clustering in terms of information gain.
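
As a small illustration of Eqs. (2)-(4), the sketch below computes entropy and information gain for a two-way split. It is not the authors' code, and the weights here are taken as the subsets' share of the total, a simplifying assumption; the paper defines attribute weights as W = w_{n_i}/T.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy(S) = -sum_i p_i log2 p_i over the class labels of S (Eq. 2)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels_a, labels_b):
    """Gain = E - (W_a*E_a + W_b*E_b) for a split of S into subsets a and b (Eq. 4)."""
    total = labels_a + labels_b
    w_a, w_b = len(labels_a) / len(total), len(labels_b) / len(total)
    return entropy(total) - (w_a * entropy(labels_a) + w_b * entropy(labels_b))

# Toy example: generalized annual_income values inside vs. outside a cluster.
inside = ["Middle", "Middle", "Middle high", "Middle"]
outside = ["Low", "Middle", "Low", "High", "Low"]
print(entropy(inside + outside), information_gain(inside, outside))
```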

Fig. 4. Result of the two spatial characterizations using information gain (information gain on the y-axis for the Annual_income, Age, and Education attributes, comparing characterization without clustering and characterization with clustering).

The experimental result demonstrates that characterization with clustering is more effective than characterization without clustering with respect to the annual income, age, and education attributes.

5 Conclusion

We propose a new spatial characterization method that generalizes spatial objects, groups them into automatically selected areas using DBSCAN density-based clustering, and aggregates them on non-spatial attributes. The proposed spatial characterization method has the following properties: first, we use density-based clustering to automatically select the search scope; second, the method can eliminate unnecessary spatial objects using attribute removal and concept hierarchy generalization operations. The elimination of unnecessary spatial objects yields useful knowledge with high information gain. The experimental result demonstrates that the performance of the proposed characterization method is better than that of the previous characterization method.


Acknowledgement. This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment).

References

1. Baeza-Yates, R., Ribeiro-Neto, B.: Modern Information Retrieval. ACM Press (2000) pp. 144-149
2. Knorr, E.M., Ng, R.: Robust Space Transformations for Distance-based Operations. SIGKDD, San Francisco, California, USA (2001) pp. 126-135
3. Shaffer, E., Garland, M.: Efficient Adaptive Simplification of Massive Meshes. In: Proceedings of IEEE Visualization 2001 (2001) pp. 127-134
4. Amores, J., Radeva, P.: Non-Rigid Registration of Vessel Structures in IVUS Images. Iberian Conference on Pattern Recognition and Image Analysis, Puerto de Andratx, Mallorca, Spain, Lecture Notes in Computer Science 2652, Springer-Verlag (2003) pp. 45-52
5. Han, J., Kamber, M.: Data Mining: Concepts and Techniques. Morgan Kaufmann (2001) pp. 130-140
6. Han, J., Cai, Y., Cercone, N.: Knowledge Discovery in Databases: An Attribute-Oriented Approach. Proceedings of the 18th VLDB Conference, Vancouver, British Columbia, Canada (1992) pp. 547-559
7. Han, J., Koperski, K., Stefanovic, N.: GeoMiner: A system prototype for spatial data mining. Proceedings of the 1997 ACM-SIGMOD International Conference on Management of Data 26(2) (1997) pp. 553-556
8. Ester, M., Kriegel, H.-P., Sander, J.: Knowledge discovery in large spatial databases: Focusing Techniques for Efficient Class Identification. In: Proc. 4th Intl. Symp. on Large Spatial Databases 951 (1995) pp. 67-82
9. Ester, M., Kriegel, H.-P., Sander, J.: Algorithms and applications for spatial data mining. Geographic Data Mining and Knowledge Discovery, London: Taylor and Francis (2001) pp. 160-187
10. Ester, M., Kriegel, H.-P., Sander, J., Xu, X.: A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In: Proc. of the 2nd International Conference on Knowledge Discovery and Data Mining, AAAI Press (1996) pp. 226-231
11. Roussopoulos, N., Kelley, S., Vincent, F.: Nearest Neighbor Queries. In: Proc. of the ACM SIGMOD Intl. Conf. on Management of Data, San Jose, CA (1995) pp. 71-79
12. Roy, N., Earnest, C.: Dynamic Action Space for Information Gain Maximization in Search and Exploration. In: Proc. of the American Control Conference, Minneapolis (2006) pp. 1631-1636
13. Park, S., Park, S.H., Ahn, C.M., Lee, Y.S., Lee, J.H.: Spatial Data Mining Query Language for SIMS. Spring Conference of the Korea Information Science Society 31(1) (2003) pp. 70-72

MTF Measurement Based on Interactive Live-Wire Edge Extraction

Peng Liu (1,2,3), Dingsheng Liu (1), and Fang Huang (1,3)

(1) China Remote Sensing Satellite Ground Station, #45, Bei San Huan Xi Road, Beijing, China
(2) Institute of Electronics, Chinese Academy of Sciences, #19, Bei Si Huan Xi Road, Beijing, China
(3) Graduate University of Chinese Academy of Sciences
{pliu, dsliu, fhuang}@ne.rsgs.ac.cn

Abstract. When we want to measure parameters of the Modulation Transfer Function (MTF) directly from remote sensing images, sharp edges are usually used as targets. However, because of noise, blur, and the complexity of the images, fully automatic location of the expected edge is still an unsolved problem. This paper improves the semi-automatic edge extraction algorithm of live-wire [1] and introduces it into the knife-edge method [8] of MTF measurement in remote sensing images. Live-wire segmentation is a novel interactive algorithm for efficient, accurate, and reproducible boundary extraction that requires minimal user input with a mouse. The image is transformed into a weighted graph with a variety of constraints. Edge searching is based on dynamic programming via Dijkstra's algorithm [5]. Optimal boundaries are computed and selected at interactive rates as the user moves the mouse starting from a manually specified seed point. In this paper, an improved live-wire model is proposed for measuring the on-orbit Modulation Transfer Function of high-spatial-resolution imaging satellites. We add a nonlinear diffusion filter term to the local cost function to ensure accurate edge extraction: it removes noise without affecting the shape of the edges during extraction, so that the calculation of the MTF is more reasonable and precise. Keywords: MTF measurement, interactive live-wire edge extraction, sharp edge.

1 Introduction

In order to measure the on-orbit MTF of remote sensing images, the knife-edge method makes use of special targets for evaluating the spatial response, since such targets stimulate the imaging system at all spatial frequencies [8]. The algorithm must determine edge locations with very high accuracy, as in Figure 1.1(a). The ESF (edge spread function) is then differentiated to obtain the LSF (line spread function), as in Figure 1.1(d). The LSF is then Fourier-transformed and normalized to obtain the corresponding MTF; see Figure 1.1(e). Thus, as in Figure 1.2(a) and (b), we would like to extract edges arbitrarily and, as in Figure 1.2(c), to acquire the perpendicular profiles easily. But because of the complexity of the images, fully automated location of the expected edge is still an unsolved problem. Especially, in most cases the edge in the image may be neither straight nor regular, so cutting a profile that is perpendicular to the edge is also very difficult. Considering the above reasons, we introduce "live-wire" [1] into MTF measurement and improve this method of edge detection. Live-wire is an active contour [2] model for efficient, accurate boundary extraction; optimal boundaries are computed and selected at interactive rates as the user moves the mouse starting from a manually specified seed point. In this paper we enhance the performance of the live-wire model and make it better suited to MTF measurement in remote sensing images.
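
The sketch below illustrates the knife-edge chain just described (ESF, differentiate to LSF, Fourier transform and normalize to MTF). It is a bare illustration, not the paper's measurement code: windowing choices and the sub-pixel edge alignment that a real measurement needs are omitted, and the synthetic edge profile is an assumption.

```python
import numpy as np

def mtf_from_esf(esf, oversample=1):
    """ESF -> LSF (derivative) -> MTF (normalized FFT magnitude)."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf = lsf * np.hanning(len(lsf))             # mild window to limit truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)
    return freqs, mtf

# Example with a synthetic blurred edge standing in for a knife-edge profile:
x = np.arange(-32, 32)
esf = 1.0 / (1.0 + np.exp(-x / 2.0))
freqs, mtf = mtf_from_esf(esf)
print(float(mtf[0]), float(mtf[len(mtf) // 2]))
```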

Fig. 1.1. The process from the sharp edges to MTF

2 Live-Wire Model

The motivation behind the live-wire algorithm is to provide the user with full control over the edge extraction [2]. Initially, the user clicks to indicate a starting point on the desired contour, and then, as the mouse is moved, it pulls a "live-wire" behind it along the contour. When the user clicks again, the old wire freezes on the contour, and a new live one starts from the clicked point. The live-wire method poses this problem as a search for shortest paths on a weighted graph. Thus the live-wire algorithm has two main parts: first, the conversion of image information into a weighted graph, and then the calculation of shortest paths in the graph [6],[9]. This paper introduces the algorithm into MTF measurement for the first time and makes several improvements.

In the first part of the live-wire algorithm, weights are calculated for each edge in the weighted graph, creating the image forces that attract the live-wire. To produce the weights, various image features are computed in a neighborhood around each graph edge and then combined in some user-adjustable fashion. The general purpose of the combination is edge localization, but individual features are generally chosen as in [9]. First, features such as the gradient and the Laplacian zero-crossing [3] have been used for edge detection. Second, directionality, or the direction of the path, should be taken into consideration locally. Third, training is the process by which the user indicates a preference for a certain type of boundary. To resist the effects of noise, this paper adds a nonlinear diffusion filter term, as described in the following sections.
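
The second stage of live-wire is a standard Dijkstra expansion over the pixel graph. The sketch below is an illustration of that stage only, not the paper's implementation; local_cost stands for the l(p, q) of Eq. (1) below and is assumed to be supplied by the caller.

```python
import heapq

def livewire_paths(image, seed, local_cost):
    """Dijkstra expansion from the seed pixel over the 8-connected pixel graph."""
    h, w = len(image), len(image[0])
    dist = {seed: 0.0}
    prev = {}
    heap = [(0.0, seed)]
    while heap:
        d, p = heapq.heappop(heap)
        if d > dist.get(p, float("inf")):
            continue                                   # stale heap entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                q = (p[0] + dy, p[1] + dx)
                if not (0 <= q[0] < h and 0 <= q[1] < w):
                    continue
                nd = d + local_cost(p, q)
                if nd < dist.get(q, float("inf")):
                    dist[q], prev[q] = nd, p
                    heapq.heappush(heap, (nd, q))
    return prev                                        # follow prev[] from any pixel back to the seed

def trace(prev, target, seed):
    """Recover the optimal boundary segment from 'target' back to the seed."""
    path = [target]
    while path[-1] != seed:
        path.append(prev[path[-1]])
    return path[::-1]
```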


In the second part of the live-wire algorithm, Dijkstra's algorithm [5] is used to find all shortest paths extending outward from the starting point in the weighted graph. The live-wire is defined as the shortest path that connects two user-selected points (the last clicked point and the current mouse location). This second step is done interactively, so the user may view and judge potential paths and control the extraction as finely as desired. This makes MTF measurement easier.

2.1 Weighted Map and Local Cost

The local costs are computed as a weighted sum of component functions such as the Laplacian zero-crossing, the gradient magnitude and the gradient direction. Letting l(p, q) represent the local cost on the directed link from pixel p to a neighboring pixel q, the local cost function is

l(p, q) = w_Z · f_Z(q) + w_D · f_D(p, q) + w_G · f_G(q) + w_div · f_div(p)    (1)

In formula (1), f_Z(q) is the Laplacian zero-crossing, f_G(q) is the gradient magnitude, f_D(p, q) is the gradient direction and f_div(p) is the divergence of the unit gradient. w_Z, w_D, w_G and w_div are their weights, i.e. each w is the weight of the corresponding feature function. The Laplacian zero-crossing is a binary edge feature used for edge localization [7]. Laplacian image zero-crossings correspond to points of maximal (or minimal) gradient magnitude. Thus, Laplacian zero-crossings represent "good" edge properties and should therefore have a low local cost. If I_L(q) is the Laplacian of an image I at pixel q, then

f_Z(q) = { 0  if I_L(q) = 0;   1  if I_L(q) ≠ 0 }    (2)

Since the Laplacian zero-crossing creates a binary feature, f_Z(q) does not distinguish between strong, high-gradient edges and weak, low-gradient edges. However, the gradient magnitude provides a direct correlation between edge strength and local cost. If I_x and I_y represent the partial derivatives of an image I in x and y respectively, then the gradient magnitude G is approximated by G = √(I_x² + I_y²). Thus, the gradient component function is

f_G = [max(G − min(G)) − (G − min(G))] / max(G − min(G)) = 1 − (G − min(G)) / max(G − min(G))    (3)

The gradient direction f_D(p, q) adds a smoothness constraint to the boundary by associating a high cost with sharp changes in boundary direction. The gradient direction is the unit vector defined by I_x and I_y. Letting D(p) be the unit vector perpendicular


to the gradient direction at the point p (i.e. D(p) = (I_y(p), −I_x(p))), the formulation of the gradient direction feature cost is:

f_D(p, q) = (1/π) { cos⁻¹[D(p) · L(p, q)] + cos⁻¹[L(p, q) · D(q)] }    (4)

D(p) = (I_y(p), −I_x(p)),   D(q) = (I_y(q), −I_x(q))    (5)

L(p, q) = (1 / ||p − q||) · { q − p  if D(p) · (q − p) ≥ 0;   p − q  if D(p) · (q − p) < 0 }    (6)

f_div(p) = − [ ∂/∂x ( I_x / (√(I_x² + I_y²) + β) ) + ∂/∂y ( I_y / (√(I_x² + I_y²) + β) ) ]    (7)

Above, "·" in D(p) · L(p, q) of (4) denotes the vector dot product. Equations (5) and (6) define the bi-directional links, or edge vectors, between pixels p and q. The link is either horizontal, vertical, or diagonal (relative to the position of q in p's neighborhood) and is oriented so that the dot product of D(p) and L(p, q) is non-negative, as noted in [6]. The direction feature cost is low when the gradient directions of the two pixels are similar to each other and to the link between them. Here, f_div(p) is the divergence of the unit gradient vector at the point p, and

w_div is the weight of this term, whose function is de-noising. This term does not appear in the original model [6]; we add it in order to suppress noise. In (7), β is a small positive constant that prevents the denominator from becoming zero. The term comes from the nonlinear diffusion filter first proposed in [4], which has been very successful as a de-noising algorithm. f_div(p) is sensitive to oscillations such as noise but does not penalize step edges, so it is small at edge locations and large at noise locations. The effect of f_div(p) is demonstrated later in the paper.

2.2 Searching for an Optimal Path

As mentioned, dynamic programming can be formulated as a directed graph search for an optimal path. This paper utilizes an optimal graph search similar to that presented by Dijkstra [5]. Further, this technique builds on and extends previous boundary tracking methods in four important ways, as in [6], but the difference of


our method is that we add the nonlinear diffusion filter to the weighted map, so that the search can resist the effect of noise. All these characteristics make MTF measurement easier. The live-wire 2-D dynamic programming (DP) graph search algorithm is as follows. Figure 2.2(a) is the initial local cost map with the seed point blackened. For simplicity of demonstration, the local costs in this example are pixel based rather than link based and can be thought of as representing the gradient magnitude cost feature. Figure 2.2(b) shows a portion of the cumulative cost and pointer map after expanding the seed point. Note that the diagonal local costs, which have been scaled by Euclidean distance, are not shown in this figure; in our method we do compute them, but for convenience they are omitted here. Figure 2.2(c) shows the state after several points have been expanded, with the seed point and the next lowest cumulative cost point on the active list. In fact, the Euclidean weighting between the seed and diagonal points makes them more costly than nondiagonal paths. Figures 2.2(d), (e), and (f) show the cumulative cost/direction pointer map at various stages of completion. Note how the algorithm produces a "wave-front" of active points emanating from the initial start point, called the seed point, and that the wave-front grows out faster where the costs are lower.

Fig. 2.2. (a)-(f) show the process of optimal path searching from one seed point to another. The blackened points are the active points on the live paths.
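A minimal sketch (ours, not the authors' code) of the Dijkstra-style expansion just described, assuming a user-supplied local cost function local_cost(p, q) such as equation (1); diagonal links are scaled by their Euclidean length as in the text.

```python
import heapq
import math

def livewire_expand(seed, shape, local_cost):
    """Expand minimum cumulative costs and back-pointers from the seed point.

    seed       -- (row, col) of the user-selected seed point
    shape      -- (rows, cols) of the image
    local_cost -- callable local_cost(p, q) returning the link cost l(p, q)
    """
    rows, cols = shape
    cum_cost = {seed: 0.0}
    pointer = {seed: None}
    done = set()
    heap = [(0.0, seed)]                        # the active list ("wave-front")
    while heap:
        cost_p, p = heapq.heappop(heap)
        if p in done:
            continue
        done.add(p)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                q = (p[0] + dr, p[1] + dc)
                if not (0 <= q[0] < rows and 0 <= q[1] < cols):
                    continue
                scale = math.hypot(dr, dc)      # Euclidean weighting of diagonals
                new_cost = cost_p + scale * local_cost(p, q)
                if new_cost < cum_cost.get(q, float("inf")):
                    cum_cost[q] = new_cost
                    pointer[q] = p
                    heapq.heappush(heap, (new_cost, q))
    return cum_cost, pointer

def trace_path(pointer, free_point):
    """Follow back-pointers from the current mouse position to the seed."""
    path = []
    p = free_point
    while p is not None:
        path.append(p)
        p = pointer[p]
    return path[::-1]
```

The live-wire displayed for the current mouse position is simply trace_path(pointer, mouse_pixel).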

The importance of the f_div(p) term is obvious here. In Figure 2.3, (a) shows edge extraction in a smooth, ideal remote sensing image; the result is satisfying, and even though the edge is not straight or regular, live-wire works well. (b) and (c) are the results of edge detection by the live-wire model without the f_div(p) term, and they are badly affected by noise: where the noise level is high, the edge extraction is not very accurate. In contrast, (d) is the result of live-wire edge extraction with the f_div(p) term; it is not affected by noise, and the boundary lies in the right position thanks to this term. The f_div(p) term is sensitive to noise but changes very little on edges, so the nonlinear diffusion filter operator f_div(p) greatly enhances the performance of the live-wire model.


Fig. 2.3. (a) Live-wire edge detection on a remote sensing image. (b) and (c) Live-wire edge detection without the f_div(p) term; (d) with the f_div(p) term.

3 MTF Measurement Based on Live-Wire

Having acquired the knife-edges, we must resample the profiles perpendicular to the edges, as in [8]. On one edge we can acquire many profiles, as Figure 3.1(b) shows. In order to check the accuracy of the algorithm, we convolve the image of Figure 3.1(a) with a known PSF to obtain the blurred image of Figure 3.1(b). The ideal MTF of the known PSF is shown in Figure 3.1(g). Firstly, we

Fig. 3.1. MTF measurement in ideal situation

select the edge image of Figure 3.1(b). Then we use live-wire to search for the optimal edge as the knife-edge and compute the profiles perpendicular to the edge, as in Figure 3.1(b). Furthermore, we use the minimum mean-square value [8] to compute the edge spread function, as in Figure 3.1(d). Since the image in Figure 3.1(b) is ideal and simple, the ESF should be very accurate. The ESF is then differentiated to obtain the LSF, as in Figure 3.1(e). Then the LSF is Fourier-transformed and


normalized to obtain the corresponding MTF, Figure 3.1(f). The ideal MTF of (b) is shown in Figure 3.1(g). Comparing Figure 3.1(f) with Figure 3.1(g), we find that live-wire works very well and the result is very precise. There is only a small error in the MTF measurement, and it lies beyond the cut-off frequency: the amplitudes from frequency 0 to 20 in (f) and (g) are almost the same, and the error appears only from 20 to 35. These results verify the accuracy of the live-wire method. Figures 3.2(a) and (b) show the same remote sensing target image, and (c) and (d) are the MTF results of different methods. In Figures 3.2(c) and (d), the blue solid line is the MTF measured by the method in [8]. In Figure 3.2(c) the red solid line is the MTF measured by our algorithm, while in Figure 3.2(d) the red solid line is the MTF measured by the live-wire algorithm without our improvement. The target image is irregular, but thanks to the accurate edge extraction and more correct profile cutting, the result based on our improvement (the red solid line in Figure 3.2(c)) is obviously more precise. Furthermore, in Figure 3.2(d) the red solid line is the MTF measured by the original live-wire model without the de-noising term f_div(p); because of the noise, the extracted edges are not very accurate, which affects the MTF measurement in (d), and the error is obvious. This illustrates that the unimproved live-wire algorithm is not well suited to MTF measurement.
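A minimal numerical sketch of the knife-edge chain described above (our illustration, not the authors' code): given an edge spread function resampled on a uniform grid, the LSF is its derivative and the MTF is the magnitude of its Fourier transform, normalized to one at zero frequency.

```python
import numpy as np

def mtf_from_esf(esf, sample_spacing=1.0):
    """Knife-edge MTF: differentiate the ESF to get the LSF, then take the FFT."""
    esf = np.asarray(esf, dtype=float)
    lsf = np.gradient(esf, sample_spacing)      # line spread function
    lsf = lsf / (lsf.sum() + 1e-12)             # normalize the LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / (mtf[0] + 1e-12)                # MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=sample_spacing)
    return freqs, mtf

# Usage with a synthetic ESF (Gaussian-blurred step edge, sigma = 1 pixel):
from scipy.special import erf
x = np.linspace(-10, 10, 201)
freqs, mtf = mtf_from_esf(0.5 * (1 + erf(x / np.sqrt(2))), sample_spacing=x[1] - x[0])
```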

Fig. 3.2. MTF measurement based on different methods

Fig. 3.3. MTF measurement for a variety of remote sensing images

Figure 3.3 shows MTF measurement using different images. It can be seen that even if the image content is complicated and the edge is not very straight, we can still extract the edge accurately, as in Figure 3.3(a). This success is due to the advantages of the live-wire algorithm and the nonlinear diffusion filter term added to the


live-wire model. Finally, we measure the MTF of the image in Figure 3.3(c), which comes from "Google Earth". Live-wire can snap to the edge easily and precisely despite the complex image content. The profile perpendicular to the edge is cut and the LSF is computed; Figure 3.3(d) is the MTF of Figure 3.3(c). Because the quality of the image is good, the resulting MTF should be relatively ideal.

4 Conclusions

In this paper we propose an improved edge detection model based on the live-wire algorithm to measure the MTF of remote sensing images. A nonlinear diffusion filter term is added, and this improvement makes the model more suitable for MTF measurement: it greatly enhances the performance of the knife-edge method and makes the measurement more convenient and flexible. We no longer need a straight edge when we want to measure the MTF of the sensors directly. Furthermore, the influence of noise is restrained after the nonlinear diffusion filter term is added to the weight map, and the profiles perpendicular to the edge can be computed simultaneously and accurately. All these advantages help us measure the MTF of the image more accurately in very complicated image contexts. Future work will focus on making use of the improved MTF parameters in the de-convolution of remote sensing images.

References
1. W. A. Barrett and E. N. Mortensen. Interactive live-wire boundary extraction. Medical Image Analysis, 1(4):331-341, 1997.
2. M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active Contour Models. In: Proc. of the First Int. Conf. on Computer Vision, London, England, pp. 259-268, June 1987.
3. E. N. Mortensen and W. A. Barrett. Intelligent Scissors for Image Composition. Computer Graphics (SIGGRAPH '95), 191-198, 1995.
4. P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. PAMI, 12(7):629-639, 1990.
5. E. W. Dijkstra. A Note on Two Problems in Connexion with Graphs. Numerische Mathematik, 269-271, 1959.
6. A. X. Falcao, J. K. Udupa, and F. K. Miyazawa. An Ultra-Fast User-Steered Image Segmentation Paradigm: Live-Wire on the Fly. IEEE Transactions on Medical Imaging, 19(1):55-61, 2000.
7. A. X. Falcao, J. K. Udupa, S. Samarasekera, and S. Sharma. User-Steered Image Segmentation Paradigms: Live-Wire and Live Lane. Graphical Models and Image Processing, 60:233-260, 1998.
8. Taeyoung Choi. IKONOS Satellite on Orbit Modulation Transfer Function (MTF) Measurement Using Edge and Pulse Method. Thesis, South Dakota State University, 2002.
9. E. N. Mortensen and W. A. Barrett. Interactive Segmentation with Intelligent Scissors. Graphical Models and Image Processing, 60(5):349-384, 1998.

Research on Technologies of Spatial Configuration Information Retrieval Haibin Sun College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao 266510, China Offer [email protected]

Abstract. The problem of spatial configuration information retrieval is a Constraint Satisfaction Problem (CSP), which can be solved using traditional CSP algorithms. However, spatial data can be reorganized using index techniques like the R-tree, and spatial objects are approximated by their Minimum Bounding Rectangles (MBRs), so spatial configuration retrieval actually operates on the MBRs and some special techniques can be studied. This paper studies the mapping relationships among the spatial relations for real spatial objects, the corresponding spatial relations for their MBRs, and the corresponding spatial relations between the intermediate nodes and the MBRs in the R-tree.

1 Introduction

Spatial configuration retrieval is an important research topic in content-based image retrieval for Geographic Information Systems (GIS), computer vision, VLSI design, etc. A user of a GIS system usually searches for configurations of spatial objects on a map that match some ideal configuration or are bound by a number of constraints. For example, a user may be looking for a place to build a house. He wishes to have a house A north of the town where he works, at a distance no greater than 10 km from his child's school B, and next to a park C. Moreover, he would like to have a supermarket D on his way to work. Under some circumstances the query conditions cannot be fully satisfied; the user may then want several alternative answers ranked by the degree of configuration similarity. For the configuration similarity query problem, representation strategies and search algorithms have been studied in several papers [1,2,3]. In the real world, spatial data often have complex geometric shapes. Calculating the spatial relationships between them directly would be very costly and would waste much time on invalid combinations. If N is the number of spatial objects and n the number of query variables, the total number of possible solutions is equal to the number of n-permutations of the N objects: N!/(N − n)!. Using Minimum Bounding Rectangles (MBRs) to approximate the geometric shapes of spatial objects and calculating the relations between rectangles reduces the calculation greatly. So we can divide spatial configuration retrieval into two steps: first, the rectangle combinations for which it is impossible to satisfy the query


conditions will be eliminated, and then the real spatial objects corresponding to the remaining rectangle combinations will be checked using computational geometry techniques. To improve the retrieval efficiency, the index data structure called the R-tree [4] or its variants R+-tree [5] and R*-tree [6] can be adopted. The next section takes topological and directional relations as examples to study the mapping relationships between the spatial relations for MBRs and the corresponding relations for real spatial objects; the last section concludes this paper.

2 Spatial Mapping Relationships

This paper mainly concerns the topological and directional relations for MBRs and the corresponding spatial relations for real spatial objects. The ideas in this paper can be applied to other relationships such as distance and spatio-temporal relations.

2.1 Topological Mapping Relationships

This paper focuses on the RCC8 relations [7] (see Fig. 1) and studies the mapping relationship between the RCC8 relations for real spatial objects and the RCC8 relations for the corresponding MBRs. Let p and q be two real spatial objects, and let p' and q' be their corresponding MBRs. If the spatial relation between p and q is PO (Partly Overlap), then the possible spatial relation between p' and q' is PO (Partly Overlap), TPP (Tangential Proper Part), NTPP (Non-Tangential Proper Part), EQ (Equal), TPPi (inverse of Tangential Proper Part) or NTPPi (inverse of Non-Tangential Proper Part), which can be denoted by the

Fig. 1. Two-dimensional examples for the eight basic relations of RCC8


disjunction form PO(p', q') ∨ TPP(p', q') ∨ NTPP(p', q') ∨ EQ(p', q') ∨ TPPi(p', q') ∨ NTPPi(p', q'). To use the R-tree to improve the efficiency of spatial configuration retrieval, the topological relations in the query condition should first be transformed into the corresponding topological relations for the MBRs, which can be used to eliminate the rectangle combinations at the leaf nodes of the R-tree that cannot fulfill the constraints. The intermediate nodes of the R-tree can also be used to speed up the retrieval process. Let p'' be the rectangle that encloses p', i.e. the parent node of leaf node p' in the R-tree, which is called an intermediate node. Given the spatial relation between p' and q', the spatial relation between p'' and q' can be derived. For example, from the spatial relation TPP(p', q'), the spatial relation PO(p'', q') ∨ TPP(p'', q') ∨ EQ(p'', q') ∨ TPPi(p'', q') ∨ NTPPi(p'', q') can be obtained. It is very interesting that the parents of the intermediate nodes also have the same property. Table 1 presents the spatial relations between two real spatial objects, the possible spatial relations that their MBRs satisfy and the possible spatial relations between the corresponding intermediate node and the MBR.

Table 1. The spatial relations between two real spatial objects, the possible spatial relations that their MBRs satisfy and the possible spatial relations between the corresponding intermediate node and the MBR

RCC8 relation between p and q | Possible RCC8 relations between MBRs p' and q' | Possible RCC8 relations between p'' and q'
DC(p,q)    | DC ∨ EC ∨ PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi | DC ∨ EC ∨ PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi
EC(p,q)    | EC ∨ PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi | EC ∨ PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi
PO(p,q)    | PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi | PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi
TPP(p,q)   | TPP ∨ NTPP ∨ EQ | PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi
NTPP(p,q)  | NTPP | PO ∨ TPP ∨ NTPP ∨ EQ ∨ TPPi ∨ NTPPi
TPPi(p,q)  | EQ ∨ TPPi ∨ NTPPi | EQ ∨ TPPi ∨ NTPPi
NTPPi(p,q) | NTPPi | NTPPi
EQ(p,q)    | EQ | EQ ∨ TPPi ∨ NTPPi
(The relations in the second column are taken between p' and q', and those in the third column between p'' and q'.)


Based on the above mapping relationships and the R-tree, the candidate MBR combinations can be retrieved efficiently; a refinement step is then needed to derive the spatial relations among the real spatial objects that the MBRs enclose, i.e. the spatial relation between p and q must be derived from the spatial relation between p' and q'. From the spatial relation between two MBRs we can derive either several possible spatial relations or only one definite spatial relation between the two enclosed real spatial objects. In the former case complex geometric computation must be applied, whereas in the latter case it can be omitted. For example, given the spatial relation NTPPi(p', q') we can only derive DC(p, q) ∨ EC(p, q) ∨ PO(p, q) ∨ NTPPi(p, q) ∨ TPPi(p, q), so geometric computation must be used to ascertain the spatial relation between p and q; but if we know the spatial relation DC(p', q'), then the spatial relation DC(p, q) can be derived directly.
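A sketch (ours, not the paper's) of how the mapping in Table 1 can drive the filter and refinement steps: the dictionary below encodes the second column of Table 1, and inverting it tells us which real-object relations remain possible once the MBR relation is known.

```python
# Column 2 of Table 1: real-object relation -> possible MBR relations
REAL_TO_MBR = {
    "DC":    {"DC", "EC", "PO", "TPP", "NTPP", "EQ", "TPPi", "NTPPi"},
    "EC":    {"EC", "PO", "TPP", "NTPP", "EQ", "TPPi", "NTPPi"},
    "PO":    {"PO", "TPP", "NTPP", "EQ", "TPPi", "NTPPi"},
    "TPP":   {"TPP", "NTPP", "EQ"},
    "NTPP":  {"NTPP"},
    "TPPi":  {"EQ", "TPPi", "NTPPi"},
    "NTPPi": {"NTPPi"},
    "EQ":    {"EQ"},
}

def mbr_pair_consistent(query_relation, mbr_relation):
    """Filter step: can this MBR pair still satisfy the queried relation?"""
    return mbr_relation in REAL_TO_MBR[query_relation]

def candidate_real_relations(mbr_relation):
    """Refinement: real-object relations still possible for an observed MBR relation."""
    return {r for r, mbrs in REAL_TO_MBR.items() if mbr_relation in mbrs}

# candidate_real_relations("DC") == {"DC"}   -> no geometry test needed,
# while candidate_real_relations("NTPPi") leaves five possibilities to check.
```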

2.2 Direction Mapping Relationships

According to Goyal and Egenhofer's cardinal direction model, there are 9 atomic cardinal direction relations (O, S, SW, W, NW, N, NE, E, SE) (see Fig. 2) and in total 218 cardinal direction relations for non-empty connected regions in the Euclidean plane (illustrated by a 3 × 3 matrix, see Fig. 3) [8]. There are 36 cardinal direction relations for the MBRs of non-empty connected regions: O, S, SW, W, NW, N, NE, E, SE, S:SW, O:W, NW:N, N:NE, O:E, S:SE, SW:W, O:S, E:SE, W:NW, O:N, NE:E, S:SW:SE, NW:N:NE, O:W:E, O:S:N, SW:W:NW, NE:E:SE, O:S:SW:W, O:W:NW:N, O:S:E:SE, O:N:NE:E, O:S:SW:W:NW:N, O:S:N:NE:E:SE, O:S:SW:W:E:SE, O:W:NW:N:NE:E, O:S:SW:W:NW:N:NE:E:SE (see Fig. 4). This kind of cardinal

Fig. 2. Capturing the cardinal direction relation between two polygons, A and B, through the projection-based partitions around A as the reference object


Fig. 3. 218 cardinal direction relations between two non-empty and connected regions[8]

direction relation has a rectangular shape, so it is also called a rectangle direction relation; otherwise it is called a non-rectangle direction relation. In the following we study the mapping relationships between the cardinal direction relations for real spatial objects and the cardinal direction relations for the corresponding MBRs. First of all, we give a definition.

Definition 1. A cardinal direction relation R contains another cardinal direction relation R' if all the atomic relations in R' also occur in R.

The mapping relationships from the cardinal direction relations for real spatial objects to the ones for their MBRs can be described using the following theorems.

Theorem 1. If the cardinal direction relation between the real spatial objects p and q is a rectangle direction relation R (see Fig. 4), the cardinal direction relation between their MBRs p' and q' is also R; if the cardinal direction relation between p and q is a non-rectangle direction relation R, the cardinal direction relation between their MBRs p' and q' is the rectangle direction relation R' in Fig. 4 that contains R and has the minimum area.

Theorem 1 can be derived by combining Fig. 3 and Fig. 4. Assume that the cardinal direction relation between two real spatial objects p and q is N:NW:W, which obviously is not a rectangle direction relation; from Fig. 4, the rectangle direction relation that contains N:NW:W and has the minimum rectangle area is O:W:NW:N, so the cardinal direction relation between the two MBRs p' and q' is O:W:NW:N.


Fig. 4. 36 cardinal direction relations for MBRs

Similarly, the mapping relationships from the cardinal direction relations for MBRs to the ones for the possible real spatial objects can be described as follows.

Theorem 2. If the cardinal direction relation R between two MBRs p' and q' contains no more than 3 atomic cardinal direction relations (including 3), the corresponding cardinal direction relation between the real spatial objects p and q is also R; otherwise, the possible cardinal direction relations between p and q are the subsets of relation R that transform to relation R when p and q are approximated by p' and q'.

For example, if the cardinal direction relation between two MBRs is S:SW:SE (consisting of the three atomic relations S, SW and SE), then the cardinal direction relation between the corresponding two real spatial objects is definitely S:SW:SE. If the cardinal direction relation between two MBRs is O:S:SW:W, the possible cardinal direction relations between the two real spatial objects include O:W:SW, W:O:S, SW:S:O, SW:S:W and O:S:SW:W.

Given the cardinal direction relation between the MBRs p' and q', the cardinal direction relation between p'' (the parent node of p' in the R-tree) and q' can be described by the following theorem.

Theorem 3. If the cardinal direction relation between the MBRs p' and q' is R, the possible cardinal direction relations between p'' and q' are the rectangle direction relations containing R.


For example, if the cardinal direction relation between p’ and q’ is O:S:SW:W, the possible cardinal direction relations between p” and q’ will be O:S:SW:W, O:S:SW:W:NW:N, O:S:SW:W:E:SE and O:S:SW:W:NW:N:NE:E:SE.
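To make the projection-based model concrete, the sketch below (our own, under the assumption that both objects are already given as axis-aligned MBRs) lists which of the nine tiles around the reference rectangle are intersected by the target rectangle, yielding rectangle direction relations such as O:S:SW:W.

```python
def cardinal_direction(a, b):
    """Tiles of the 3x3 grid around reference MBR b that target MBR a overlaps.

    Rectangles are (xmin, ymin, xmax, ymax); a tile is reported when the
    overlap has positive area (boundary-only contact is ignored here).
    """
    axmin, aymin, axmax, aymax = a
    bxmin, bymin, bxmax, bymax = b
    inf = float("inf")
    cols = {"W": (-inf, bxmin), "": (bxmin, bxmax), "E": (bxmax, inf)}
    rows = {"S": (-inf, bymin), "": (bymin, bymax), "N": (bymax, inf)}
    tiles = []
    for rname, (ylo, yhi) in rows.items():
        for cname, (xlo, xhi) in cols.items():
            if min(axmax, xhi) > max(axmin, xlo) and min(aymax, yhi) > max(aymin, ylo):
                tiles.append((rname + cname) or "O")   # empty name = centre tile O
    return ":".join(tiles)

# cardinal_direction((0, 0, 3, 3), (2, 2, 6, 6)) yields the tiles
# {O, W, S, SW}, i.e. the relation O:S:SW:W up to tile ordering.
```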

3 Conclusions

This paper has studied the spatial configuration information retrieval problem, in particular the mapping relationships among the spatial relations (topological and directional) for real spatial objects, the corresponding spatial relations for their MBRs, and the spatial relations between intermediate nodes and the MBRs in the R-tree. Based on these results, search algorithms can be designed to solve spatial configuration retrieval problems. The research in this paper is valuable for information retrieval systems related to spatial data.

References
1. Bergman, L., Castelli, V., Li, C-S.: Progressive Content-Based Retrieval from Satellite Image Archives. D-Lib Magazine, October 1997. http://www.dlib.org/dlib/october97/ibm/10li.html
2. Gupta, A., Jain, R.: Visual Information Retrieval. Communications of the ACM, May 1997, 40(5): 70-79.
3. Orenstein, J. A.: Spatial Query Processing in an Object-Oriented Database System. In: Proc. of the 1986 ACM SIGMOD International Conference on Management of Data, 1986, pages 326-336.
4. Guttman, A.: R-trees: A Dynamic Index Structure for Spatial Searching. In: Proc. of ACM SIGMOD, 1984, pages 47-57.
5. Sellis, T. K., Roussopoulos, N., Faloutsos, C.: The R+-Tree: A Dynamic Index for Multi-Dimensional Objects. In: Proceedings of the 13th International Conference on Very Large Data Bases, September 1-4, 1987, Brighton, England, pages 507-518.
6. Beckmann, N., Kriegel, H.-P., Schneider, R., Seeger, B.: The R*-tree: An Efficient and Robust Access Method for Points and Rectangles. In: Proceedings of the ACM SIGMOD, 1990, pages 322-331.
7. Randell, D. A., Cui, Z., Cohn, A. G.: A Spatial Logic Based on Regions and Connection. In: Proc. 3rd Int. Conf. on Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, 1992, pages 165-176.
8. Cicerone, S., Di Felice, P.: Cardinal directions between spatial objects: the pairwise-consistency problem. Information Sciences, 2004, 164(1-4): 165-188.

Modelbase System in Remote Sensing Information Analysis and Service Grid Node Yong Xue1,2, Lei Zheng1,3, Ying Luo1,3, Jianping Guo1,3, Wei Wan1,3, Wei Wei1,3, and Ying Wang1,3 1

State Key Laboratory of Remote Sensing Science, Jointly Sponsored by the Institute of Remote Sensing Applications of Chinese Academy of Sciences and Beijing Normal University, Institute of Remote Sensing Applications, Chinese Academy of Sciences, P.O. Box 9718, Beijing 100101, China 2 Department of Computing, London Metropolitan University, 166-220 Holloway Road, London N7 8DB, UK 3 Graduate School, Chinese Academy of Sciences, Beijing 100049, China [email protected]

Abstract. In this article we describe a modelbase system used in the Remote Sensing Information Analysis and Service Grid Node (RISN) at the Institute of Remote Sensing Applications (IRSA), Chinese Academy of Sciences (CAS). The goal of the Node is to make good use of physically distributed resources in the field of remote sensing science, such as data, models and algorithms, and of computing resources left unused on the Internet. With the modelbase system we can organize and manage models better and make full use of them. With this system we can easily update both local and remote models, and we can also add remote modelbases into our modelbase management system. At the same time, we provide interfaces to access and run models from our node. In the implementation, we use Oracle to organize and manage models, and the Java language to connect to the Oracle database and run models on the Condor platform. Keywords: Modelbase system, Remote Sensing Information Analysis and Service Grid Node, modelbase management system.

1 Introduction

Research on model management theory began in the 1980s. In 1980, Blanning (1980) first introduced the notion of a modelbase and designed a model query language (MQL), analogous to a database query language, to manage models. Geoffrion (1987) designed the structured modeling language (SML), which introduced structured program design into model building. Muhanna et al. (1988) introduced systems theory into modelbase management. Wesseling et al. (1996) designed a dynamic modeling language to support special data structures. Modelbases can be divided into two kinds according to their models: graph modelbases, whose models are graphs, and arithmetic modelbases, whose models are arithmetic or programs. Kuntal et al. (1995) organized large structural modelbases full of graph


models in 1995, which gave a good example of a graph modelbase, but arithmetic modelbases differ according to their applications. Liu et al. built a simulation support system for cold rolling process control based on a modelbase. Li et al. (2002) built water environment models and put them into a modelbase. Here we present an arithmetic modelbase that is used in remote sensing applications and can also connect to a Grid environment, the Condor platform, through the RISN at IRSA, CAS, China. The node is a special node of the Spatial Information Grid (SIG) in China. The node is introduced in Section 2. The modelbase system and the function of the modelbase in the node are demonstrated in Section 3. Finally, conclusions and further development are addressed in Section 4.

2 Remote Sensing Information Analysis and Service Grid Node

Remotely sensed data is one of the most important spatial information sources, so research on the architecture and technical support of the RISN is a significant part of the research on SIG. The aim of the Node is to integrate distributed data, traditional algorithms, software, and computing resources, to provide a one-stop service to everyone on the Internet, and to make good use of everything pre-existing. The Node is extendable: it may contain many personal computers, supercomputers or other nodes, but it can also be as small as a single personal computer. There are two entries to the Node: 1) a portal implemented with JSP, which users can visit through Internet browsers such as Internet Explorer; 2) a portal implemented with web service and workflow technology, which is dedicated to SIG, so that other Grid resources and Grid systems can be integrated with our node through it. The node provides application services such as aerosol optical depth retrieval, land surface temperature retrieval, soil moisture retrieval, surface reflectance retrieval, and vegetation index applications from MODIS data. To manage models, we use a modelbase system in the node. Figure 1 shows the structure of our modelbase system.

3 Modelbase System in Remote Sensing Information Analysis and Service Grid Node

The modelbase system has two main parts: the modelbase file system and the modelbase management system.

3.1 Modelbase File System

The modelbase file system is the place where models are stored. In the RISN, models are stored on several computers, on remote computers and in remote modelbases, but all models are treated as if they were stored on one computer by the modelbase management system. We achieve a distributed management system with this architecture. The system has another benefit: we can freely add individual models or whole remote modelbases without downloading the models to our computers. In our RISN we provide the Condor platform to run models on the Grid system, so the architecture fits the Grid philosophy of sharing resources.

Fig. 1. Information registry of models in remote modelbase

3.2 Modelbase Management System

The modelbase management system contains the registry information of the models, the interface with the Condor platform and the interface with the portal.

3.2.1 Registry of Models. We adopt the notion of a meta-module (Xue et al. 2006) and extend it in our system. Every model is a meta-module, and its information is stored in the modelbase table.

3.2.2 Updating a Remote Modelbase. In our design, a remote modelbase can be added conveniently without downloading its models to our computers. However, in order to add models on a remote computer into our modelbase system, the information about the models needs to be registered, and a method must be provided to fetch the models if you want others to be able to run them. If you register your modelbase information in our modelbase system, we provide an interface to connect to your modelbase and make it part of ours.

3.2.3 Model Access and Execution on a Grid Computing Platform. As our modelbase system is part of the RISN, we provide interfaces to connect with our portal and workflow. After selecting the model you want to run, you can either run it by pressing the "submit" button on the portal, or download the model to your own computer according to its registry information and run it there. If you choose to run the model on our Remote Sensing Information Analysis and Service Grid Node, we provide some models that run on the Condor platform; you only need to input the parameters as required.
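As an illustration only (the record layout and file names below are our assumptions, not the node's actual schema), a model's registry entry could be kept as a small record and turned into an HTCondor submit description when the user presses "submit":

```python
import subprocess
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Hypothetical meta-module entry as stored in the modelbase table."""
    model_id: str
    executable: str          # path of the model program
    arguments: str           # required input parameters
    location: str            # "local" or the address of a remote modelbase

def write_submit_file(rec: ModelRecord, path="model.sub"):
    # Minimal HTCondor submit description for a vanilla-universe job.
    with open(path, "w") as f:
        f.write(f"executable = {rec.executable}\n")
        f.write(f"arguments  = {rec.arguments}\n")
        f.write("universe   = vanilla\n")
        f.write(f"output     = {rec.model_id}.out\n")
        f.write(f"error      = {rec.model_id}.err\n")
        f.write(f"log        = {rec.model_id}.log\n")
        f.write("queue\n")
    return path

def run_on_condor(rec: ModelRecord):
    # Hand the job to the Condor scheduler; assumes condor_submit is installed.
    subprocess.run(["condor_submit", write_submit_file(rec)], check=True)
```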

4 Conclusion and Future Work

Grid technology is a very effective method for remotely sensed data processing services. Through it, existing remotely sensed models and algorithm resources can


be shared like common services in the Grid environment. In this article we described a modelbase system used in the Remote Sensing Information Analysis and Service Grid Node, which can be used on the Condor platform. With the modelbase system we can organize and manage models better, make full use of them, and run them on the Condor platform. Models can currently be added by any authorized user; in the future, models will be added to the modelbase only after being checked by an administrator.

Acknowledgment This publication is an output from the research projects “Multi-scale Aerosol Optical Thickness Quantitative Retrieval from Remotely Sensing Data at Urban Area” (40671142) and "Grid platform based aerosol monitoring modeling using MODIS data and middlewares development" (40471091) funded by NSFC, China and “863 Program - Remote Sensing Information Processing and Service Node” funded by the MOST, China.

References
1. Blanning, R.W.: How Managers Decide to Use Planning Models. Long Range Planning, 13 (1980) 32-35
2. Geoffrion, A.: An Introduction to Structured Modeling. Management Science, 33:5 (1987) 547-588
3. Muhanna, W.A., Roger, A.P.: A System Framework for Model Software Management. TIMS/ORSA meeting, Washington, DC. 4 (1988)

Density Based Fuzzy Membership Functions in the Context of Geocomputation Victor Lobo1,2, Fernando Bação1, and Miguel Loureiro1 1

ISEGI - UNL Portuguese Naval Academy {vlobo, bacao, mloureiro}@isegi.unl.pt 2

Abstract. Geocomputation has a long tradition in dealing with fuzziness in different contexts, most notably in the challenges created by the representation of geographic space in digital form. Geocomputation tools should be able to address the eminently continuous nature of geo-phenomena and the fuzziness that accompanies it. Fuzzy Set Theory allows partial memberships of entities to concepts with non-crisp boundaries. In general, the application of fuzzy methods is distance-based and for that reason is insensitive to changes in density. In this paper a new method for defining density-based fuzzy membership functions is proposed. The method automatically determines fuzzy membership coefficients based on the distribution density of the data. The density estimation is done using a Self-Organizing Map (SOM). The proposed method can be used to accurately describe clusters of data which are not well characterized using distance methods. We show the advantage of the proposed method over traditional distance-based membership functions. Keywords: fuzzy membership, fuzzy set theory, density based clustering, SOM.

1 Introduction

One of the most challenging tasks in geocomputation has been the need to provide an adequate digital representation to continuous phenomena such as those typically captured in geographic information. The need to define crisp boundaries between objects in geographic space leads to data representations that, while apparently providing a rigorous description, in reality have serious limitations as far as fuzziness and accuracy are concerned. "There is an inherent inexactness built into spatial, temporal and spatio-temporal databases" largely due to the "artificial discretization of what are most often continuous phenomena" [1]. The subtleties that characterize space and time changes in geo-phenomena constitute a problem as they carry large levels of fuzziness and uncertainty. While fuzziness might be characterized as inherent imprecision which affects indistinct boundaries between geographical features, uncertainty is related to the lack of information [1]. Note that these characteristics are not limited to the spatial representation but also include categorization, and attribute data. All these facts lead to the eminently fuzzy nature of


the data used in geocomputation, or as [2] puts it, "uncertainty is endemic in geographic data". Rather than ignoring these problems and dismissing them as irrelevant, the geocomputation field should be able to devise ways of dealing with them. In many instances this will translate into attributing uncertainty levels to the representations. This way the user will be aware of the limitations involved in the use of the data, and thus be able to intuitively attribute some level of reliability. Fuzzy Set Theory constitutes a valuable framework to deal with these problems when reasoning and modeling in geocomputation [2]. Fuzzy Set Theory allows partial memberships of entities to concepts with non-crisp boundaries. The fundamental idea is that while it may not be possible to assign a particular pattern to a specific class, it is possible to define a membership value. In general, the application of automatic fuzzy methods is distance-based: the (geographic or attribute) distance between a prototype and a pattern defines the membership value of the pattern to the set defined by the prototype. This approach is not only intuitive but also adequate for many different applications. Nevertheless, there is another way of approaching the problem, which trades distance for density. In this case, it is not the distance of the pattern to the prototype that governs the membership value, but the variations in pattern density between them. This perspective emphasizes discontinuous zones, treating them as potential boundaries. Membership is a function of the changes in density along the path between the pattern and the prototype; thus, if the density is constant, the membership value will be high. There are several classical examples in clustering where the relevance of density is quite obvious [3]. In this paper a new method for defining density-based fuzzy membership functions is proposed. The method automatically determines fuzzy membership coefficients based on the distribution density of the data. The density estimation is done using a Self-Organizing Map (SOM). The proposed method can be used to accurately describe data which are not well characterized using distance methods. We show the advantage of the proposed method over traditional distance-based membership functions.
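For contrast with the density-based proposal, the sketch below (ours, not part of the paper) shows a classical distance-based membership of the kind used in fuzzy c-means: a pattern's membership to a prototype depends only on its relative distances to all prototypes, not on the density of the data between them.

```python
import numpy as np

def fcm_memberships(X, C, m=2.0):
    """Distance-based fuzzy memberships u[i, c] of patterns X to prototypes C.

    X : (n, d) array of input patterns
    C : (k, d) array of prototypes
    m : fuzzifier (> 1); larger m gives softer memberships
    """
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)    # (n, k) distances
    d = np.fmax(d, 1e-12)                                        # avoid division by zero
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)) # d_ic / d_ij
    return 1.0 / ratio.sum(axis=2)                               # u_ic, rows sum to 1
```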

2 Problem Statement

Fuzzy Set Theory was introduced in 1965 by Zadeh [4]. A fuzzy set may be regarded as a set with non-crisp boundaries. This approach provides a tool for representing vague concepts, by allowing partial memberships of entities to concepts. In the context of data analysis, entities are usually represented by data patterns and concepts are also represented by data patterns that are used as prototypes. Fuzzy membership may be defined using the following formalism. Given:
• a set of n input patterns X = {x1, …, xi, …, xn}, where xi = (xi1, …, xij, …, xid)T ∈ ℜd, and each measure xij is a feature or attribute of pattern xi
• a set of k concepts defined by prototypes C = {c1, …, cc, …, ck}, where cc = (cc1, …, ccj, …, ccd)T ∈ ℜd, with k <

<joint type="hinge" id="1">
  <entities environmentId="0" entityId1="3" entityId2="2" anchorEntity="2"/>
  <anchor xPos="-2" yPos="5" zPos="0"/>
  <axis xDir="0" yDir="0" zDir="1"/>
  <angles min="-45" max="0"/>
  <activeIF entity="1" isGrabbed="1"/>
</joint>

<joint type="hinge" id="2" active="0">
  <entities environmentId="0" entityId1="2" entityId2="1" anchorEntity="2"/>
  <anchor xPos="4" yPos="0" zPos="0"/>
  <axis xDir="0" yDir="1" zDir="0"/>
  <angles min="-90" max="90"/>
  <deactivateIF jointId="2" angle1GT="-1"/>
  <activateIF jointId="1" angle1LT="-25"/>
</joint>

Listing 1.1. Joint definition of a door

6.2 Natural Interaction

To realise natural interaction, tracking systems are incorporated. The movement of the user's input device is directly mapped onto the cursor position and orientation in the VE. The physics module can be used to simulate object properties like gravity. If gravity is used in the physics simulation of the environment, one client has to act as a master for the simulation: when an object is dropped, gravity and the proper rebound have to be calculated. This type of simulation is computed locally by the master client, who is in control of the object, and the resulting transformation matrices have to be transferred over the network.

6.3 Concurrent Object Manipulation

Concurrent object manipulation as described by Broll [12] allows two users to manipulate the same attribute of the same virtual object in real time. This type of interaction can be extremely useful in construction scenarios or safety applications: obstacles could be carried away or building material could be arranged. The VR systems mentioned in Section 2 do not support cooperative manipulation; they lock access to an object exclusively to a single user. Broll suggests combining interaction requests and calculating the resulting object position at one participant's site to keep the system consistent. An alternative approach was developed by Froehlich et al. [16], who incorporate physics to cooperatively manipulate objects during assembly tasks; their approach attaches a physically simulated spring between the cursor of each user and the point where the user grabbed the object. In our case, concurrent object manipulation is detected when two transformations of the same object are registered by the transformation manager. In that case a special merging step is introduced, which can be implemented using Froehlich's physics approach, and the resulting transformation is applied to the object. Since the immediate input transformations from the local user and the slightly delayed transformations from the remote user, which can still be extrapolated, are both available, it is possible to provide a relatively correct and highly responsive representation of the cooperatively manipulated object. An example of such merging is sketched below.
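A simplified sketch (ours, inspired by the spring idea of Froehlich et al., not the inVRs implementation) of merging two users' concurrent grabs: each cursor pulls its grab point on the object with a damped spring, and the summed force moves the object. Rotation (torque) is ignored for brevity.

```python
import numpy as np

def merge_concurrent_grabs(obj_pos, obj_vel, grabs, k=50.0, c=5.0, mass=1.0, dt=0.01):
    """One integration step of an object pulled by several grab springs.

    obj_pos, obj_vel : 3-vectors of the object's position and velocity
    grabs            : list of (cursor_pos, grab_offset) pairs, one per user;
                       grab_offset is the grab point relative to obj_pos
    """
    obj_pos = np.asarray(obj_pos, float)
    obj_vel = np.asarray(obj_vel, float)
    force = np.zeros(3)
    for cursor_pos, grab_offset in grabs:
        grab_point = obj_pos + np.asarray(grab_offset, float)
        stretch = np.asarray(cursor_pos, float) - grab_point
        force += k * stretch                 # spring pulling toward the user's cursor
    force -= c * obj_vel                     # damping keeps the motion stable
    obj_vel = obj_vel + (force / mass) * dt  # explicit Euler integration
    obj_pos = obj_pos + obj_vel * dt
    return obj_pos, obj_vel
```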

7 Conclusions and Future Work

This paper has given an introduction to the use of physics simulation for interaction in NVEs. A physics module for the inVRs framework allows joints to be defined for the interconnection of scene graph nodes, and these nodes can be used in highly interactive NVEs. Three types of interaction have demonstrated the use of physics simulation for VEs, especially in training scenarios. Advanced methods for the synchronisation of physics still have to be found. An approach similar to synchronising particle systems on multiple displays might be used: distributing random seeds for some aspects of the physics calculation might allow parts of the VE to be simulated locally.


References
1. Haller, M., Holm, R., Volkert, J., Wagner, R.: A VR based safety training in a petroleum refinery. In: Annual Conference of the European Association for Computer Graphics (EUROGRAPHICS '99) (1999)
2. Haller, M., Kurka, G., Volkert, J., Wagner, R.: omVR - a safety training system for a virtual refinery. In: ISMCR '99, Tokyo, Japan (1999)
3. Tate, D.L., Sibert, L., King, T.: Virtual environments for shipboard firefighting training. In: IEEE Virtual Reality Annual International Symposium (VRAIS '97), Albuquerque, NM, USA, IEEE Computer Society (1997) 61-68
4. Eberly, D.H.: Game Physics. The Morgan Kaufmann Series in Interactive 3D Technology. Morgan Kaufmann (2004)
5. Baraff, D.: Dynamic Simulation of Non-Penetrating Rigid Bodies. PhD thesis, Department of Computer Science, Cornell University, Ithaca, NY 14853-7501, USA (1992)
6. Carlsson, C., Hagsand, O.: DIVE - a multi-user virtual reality system. In: IEEE Virtual Reality Annual International Symposium (VRAIS '93), Seattle, WA, USA, IEEE Computer Society (1993) 394-400
7. Tramberend, H.: Avocado: A Distributed Virtual Environment Framework. PhD thesis, Technische Fakultät, Universität Bielefeld (2003)
8. Greenhalgh, C., Benford, S.: Massive: A distributed virtual reality system incorporating spatial trading. In: IEEE International Conference on Distributed Computing Systems (DCS '95), Vancouver, Canada, IEEE Computer Society (1995) 27-34
9. Singhal, S.K., Zyda, M.J.: Networked Virtual Environments - Design and Implementation. Addison-Wesley Professional (1999)
10. Mine, M.R.: Virtual environment interaction techniques. TR95-018, University of North Carolina, Chapel Hill, NC 27599-3175 (1995)
11. Roberts, D.J., Wolff, R., Otto, O., Steed, A.: Constructing a gazebo: Supporting teamwork in a tightly coupled, distributed task in virtual reality. Presence: Teleoperators and Virtual Environments 12 (2003) 644-657
12. Broll, W.: Interacting in distributed collaborative virtual environments. In: IEEE Virtual Reality Annual International Symposium (VRAIS '95), Los Alamitos, CA, USA, IEEE Computer Society (1995) 148-155
13. Jorissen, P., Lamotte, W.: Dynamic physical interaction platform for collaborative virtual environments. In: CollabTech 2005, Tokyo, Japan (2005)
14. Anthes, C., Volkert, J.: inVRs - a framework for building interactive networked virtual reality systems. In: International Conference on High Performance Computing and Communications (HPCC '06), Munich, Germany, Springer (2006) 894-904
15. Anthes, C., Landertshamer, R., Bressler, H., Volkert, J.: Managing transformations and events in networked virtual environments. In: International MultiMedia Modeling Conference (MMM '07), Singapore, Springer (2007)
16. Fröhlich, B., Tramberend, H., Beers, A., Agrawala, M., Baraff, D.: Physically-based manipulation on the responsive workbench. In: IEEE Virtual Reality (VR '00), New Brunswick, NJ, USA, IEEE Computer Society (2000) 5-12

Middleware in Modern High Performance Computing System Architectures Christian Engelmann, Hong Ong, and Stephen L. Scott Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6164, USA {engelmannc,hongong,scottsl}@ornl.gov http://www.fastos.org/molar

Abstract. A recent trend in modern high performance computing (HPC) system architectures employs “lean” compute nodes running a lightweight operating system (OS). Certain parts of the OS as well as other system software services are moved to service nodes in order to increase performance and scalability. This paper examines the impact of this HPC system architecture trend on HPC “middleware” software solutions, which traditionally equip HPC systems with advanced features, such as parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and user interaction techniques. Since the approach of keeping the compute node software stack small and simple is orthogonal to the middleware concept of adding missing OS features between OS and application, the role and architecture of middleware in modern HPC systems needs to be revisited. The result is a paradigm shift in HPC middleware design, where single middleware services are moved to service nodes, while runtime environments (RTEs) continue to reside on compute nodes. Keywords: High Performance Computing, Middleware, Lean Compute Node, Lightweight Operating System.

1 Introduction

The notion of “middleware” in networked computing systems stems from certain deficiencies of traditional networked operating systems (OSs), such as Unix and its derivatives, e.g., Linux, to seamlessly collaborate and cooperate. The concept of concurrent networked computing and its two variants, parallel and distributed computing, is based on the idea of using multiple networked computing systems collectively to achieve a common goal. While traditional OSs contain networking features, they lack in parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and 

This research is sponsored by the Mathematical, Information, and Computational Sciences Division; Office of Advanced Scientific Computing Research; U.S. Department of Energy. The work was performed at the Oak Ridge National Laboratory, which is managed by UT-Battelle, LLC under Contract No. De-AC05-00OR22725.



user interaction techniques, since traditional OSs were not originally designed as parallel or distributed OSs. Similarly, traditional OSs also do not differentiate between various architectural traits, such as heterogeneous distributed or massively parallel. Since the emergence of concurrent networked computing, there have been two different approaches to deal with these deficiencies. While one approach adds missing features to an existing networked OS using middleware that sits inbetween the OS and applications, the other approach focuses on adding missing features to the OS by either modifying an existing networked OS or by developing a new OS specifically designed to provide needed features. Both approaches have their advantages and disadvantages. For example, middleware is faster to prototype due to the reliance on existing OS services, while OS development is a complex task which needs to deal with issues that have been already solved in existing OSs, such as hardware drivers. Software development for high performance computing (HPC) systems is always at the forefront with regards to both approaches. The need for efficient, scalable distributed and parallel computing environments drives the middleware approach as well as the development of modified or new OSs. Well known HPC middleware examples are the Parallel Virtual Machine (PVM) [1], the Message Passing Interface (MPI) [2], the Common Component architecture (CCA) [3], and the Grid concept [4]. Examples for modifications of existing OSs for HPC include the Beowulf Distributed Process Space (BProc) [5], cluster computing toolkits, like OSCAR [6] and Rocks [7], as well as a number of Single System Image (SSI) solutions, like Scyld [8] and Kerrighed [9]. Recent successes in OSs for HPC systems are Catamount on the Cray XT3/4 [10] and the Compute Node Kernel (CNK) on the IBM Blue Gene/L system [11]. A runtime environment (RTE) is a special middleware component that resides within the process space of an application and enhances the core features of the OS by providing additional abstraction (virtual machine) models and respective programming interfaces. Examples are message passing systems, like PVM and implementations of MPI, but also component frameworks, such as CCA, dynamic instrumentation solutions, like Dyninst [12], as well as visualization and steering mechanisms, such as CUMULVS [13]. This paper examines a recent trend in HPC system architectures toward “lean” compute node solutions and its impact on the middleware approach. It describes this trend in more detail with regards to changes in HPC hardware and software architectures and discusses the resulting paradigm shift in software architectures for middleware in modern HPC systems.

2 Modern HPC System Architectures

The emergence of cluster computing in the late 90s not only made scientific computing affordable to everyone using commercial off-the-shelf (COTS) hardware, it also introduced the Beowulf cluster system architecture [14,15] (Fig. 1) with its single head node controlling a set of dedicated compute nodes. In this

Fig. 1. Traditional Beowulf Cluster System Architecture

Fig. 2. Generic Modern HPC System Architecture

architecture, head node, compute nodes, and interconnects can be customized to their specific purpose in order to improve efficiency, scalability, and reliability. Due to its simplicity and flexibility, many supercomputing vendors adopted the Beowulf architecture either completely in the form of HPC Beowulf clusters or in part by developing hybrid HPC solutions. Most architectures of today‘s HPC systems have been influenced by the Beowulf cluster system architecture. While they are designed based on fundamentally different system architectures, such as vector, massively parallel processing (MPP), single system image (SSI), the Beowulf cluster computing trend has led to a generalized architecture for HPC systems. In this generalized HPC system architecture (Fig. 2), a number of compute nodes perform the actual parallel computation, while a head node controls the system and acts as a gateway to users and external resources. Optional service nodes may offload specific head node responsibilities in order to improve performance and scalability. For further improvement, the set of compute nodes may be partitioned (Fig. 3), tying individual service nodes to specific compute node partitions. However, a system‘s architectural footprint is still defined by its compute node hardware and software configuration as well as the compute node interconnect. System software, such as OS and middleware, has been influenced by this trend as well, but also by the need for customization and performance improvement. Similar to the Beowulf cluster system architecture, system-wide management and gateway services are provided by head and service nodes. However, in contrast to the original Beowulf cluster system architecture with its “fat” compute nodes running a full OS and a number of middleware services, today‘s HPC systems typically employ “lean” compute nodes (Fig. 4) with a basic OS and only a small



Fig. 3. Generic Modern HPC System Architecture with Compute Node Partitions

Fig. 4. Traditional Fat (a) vs. Modern Lean (b) Compute Node Software Architecture

amount of middleware services, if any middleware at all. Certain OS parts and middleware services are provided by service nodes instead.

The following overview of the Cray XT4 [16] system architecture illustrates this recent trend in HPC system architectures. The XT4 is the current flagship MPP system of Cray. Its design builds upon a single processor node, or processing element (PE). Each PE consists of one AMD microprocessor (single, dual, or quad core) coupled with its own memory (1-8 GB) and a dedicated communication resource. The system incorporates two types of processing elements: compute PEs and service PEs. Compute PEs run a lightweight OS kernel, Catamount, that is optimized for application performance. Service PEs run standard SUSE Linux [17] and can be configured for I/O, login, network, or system functions. The I/O system uses the highly scalable Lustre [18,19] parallel file system. Each compute blade includes four compute PEs for high scalability in a small footprint. Service blades include two service


PEs and provide direct I/O connectivity. Each processor is directly connected to the interconnect via its Cray SeaStar2 routing and communications chip over a 6.4 GB/s HyperTransport path. The router in the Cray SeaStar2 chip provides six high-bandwidth, low-latency network links to connect to six neighbors in the 3D torus topology. The Cray XT4 hardware and software architecture is designed to scale steadily from 200 to 120,000 processor cores.

The Cray XT4 system architecture with its lean compute nodes is not an isolated case. For example, the IBM Blue Gene/L solution also uses a lightweight compute node OS in conjunction with service nodes. In fact, the CNK on the IBM Blue Gene/L forwards most supported POSIX system calls to the service node for execution using a lightweight remote procedure call (RPC).

System software solutions for modern HPC architectures, as exemplified by the Cray XT4, need to deal with certain architectural limitations. For example, the compute node OS of the Cray XT4, Catamount, is a non-POSIX lightweight OS, i.e., it does not provide multiprocessing, sockets, and other POSIX features. Furthermore, compute nodes do not have direct attached storage (DAS); instead, they access networked file system solutions via I/O service nodes. The role and architecture of middleware services and runtime environments in modern HPC systems need to be revisited as compute nodes provide fewer capabilities and scale up in numbers.
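As a purely illustrative sketch of the system-call forwarding idea (this is not the actual CNK or Catamount protocol; the opcode, message layout and service port are invented), a compute-node-side client could ship a file-open request to an I/O service node roughly as follows:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;

    // Illustrative sketch only: forwarding an open() from a lean compute node
    // to an I/O service node over a simple request/reply exchange.
    public final class IoForwardingClient {
        private static final int OP_OPEN = 1;        // invented opcode

        private final String serviceNodeHost;
        private final int servicePort;

        public IoForwardingClient(String serviceNodeHost, int servicePort) {
            this.serviceNodeHost = serviceNodeHost;
            this.servicePort = servicePort;
        }

        // Ship the open() request to the service node and return the remote handle.
        public int open(String path) throws Exception {
            try (Socket s = new Socket(serviceNodeHost, servicePort);
                 DataOutputStream out = new DataOutputStream(s.getOutputStream());
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                out.writeInt(OP_OPEN);
                out.writeUTF(path);
                out.flush();
                return in.readInt();                 // handle allocated by the service node
            }
        }
    }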

3 Modern HPC Middleware

Traditionally, middleware solutions in HPC systems provide certain basic services, such as a message passing layer, fault tolerance support, runtime reconfiguration, and advanced services, like application steering mechanisms, user interaction techniques, and scientific data management. Each middleware layer is typically an individual piece of software that consumes system resources, such as memory and processor time, and provides its own core mechanisms, such as network communication protocols and plug-in management. The myriad of developed middleware solutions has led to the “yet another library” and “yet another daemon” phenomena, where applications need to link many interdependent libraries and run concurrently with service daemons. As a direct result, modern HPC system architectures employ lean compute nodes using lightweight OSs in order to increase performance and scalability by reducing the compute node OS and middleware to the absolute minimum. Basic and advanced middleware components are placed on compute nodes only if their function requires it; otherwise, they are moved to service nodes. In fact, middleware becomes an external application support service, which compute nodes access via the network. Furthermore, single middleware services on service nodes provide support for multiple compute nodes via the network. They still perform the same role, but in a different architectural configuration. While middleware services, such as daemons, run on service nodes, RTEs continue to run on compute nodes either partially by interacting with middleware services on service nodes or

Middleware in Modern High Performance Computing System Architectures

789

completely as standalone solutions. In both cases, RTEs have to deal with existing limitations on compute nodes, such as missing dynamic library support. While each existing HPC middleware solution needs to be evaluated regarding its original primary purpose and software architecture before porting it to modern HPC system architectures, new middleware research and development efforts need to take into account the described modern HPC system architecture features and resulting HPC middleware design requirements.
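To make the architectural configuration just described more concrete, the following sketch illustrates, under stated assumptions, the service-node side of this model: a single middleware daemon serving many lean compute-node clients over the network. The echo-style request handling is only a placeholder for real middleware functionality, and the port number and thread-pool size are arbitrary choices.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative sketch only: a middleware daemon running on a service node
    // and answering requests from many lean compute-node clients, instead of
    // running as "yet another daemon" on every compute node.
    public final class ServiceNodeDaemon {
        public static void main(String[] args) throws Exception {
            int port = 5000;                                  // hypothetical service port
            ExecutorService workers = Executors.newFixedThreadPool(16);
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    Socket client = server.accept();          // one lean client per request
                    workers.submit(() -> handle(client));
                }
            }
        }

        private static void handle(Socket client) {
            try (DataInputStream in = new DataInputStream(client.getInputStream());
                 DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                String request = in.readUTF();
                out.writeUTF("ack:" + request);               // stand-in for real service logic
            } catch (Exception e) {
                // a production daemon would log and keep serving other clients
            }
        }
    }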

4 Discussion

The described recent trend in HPC system architectures toward lean compute node solutions significantly impacts HPC middleware solutions. The deployment of lightweight OSs on compute nodes leads to a paradigm shift in HPC middleware design, where individual middleware software components are moved from compute nodes to service nodes depending on their runtime impact and requirements. The traditional interaction between middleware components on compute nodes is replaced by interaction of lightweight middleware components on compute nodes with middleware services on service nodes.

Functionality. Due to this paradigm shift, the software architecture of modern HPC middleware needs to be adapted to a service node model, where middleware services running on a service node provide essential functionality to middleware clients on compute nodes. In partitioned systems, middleware services running on a partition service node provide essential functionality to middleware clients on compute nodes belonging to their partition only. Use case scenarios that require middleware clients on compute nodes to collaborate across partitions are delegated to their respective partition service nodes.

Performance and Scalability. The service node model for middleware has several performance, scalability, and reliability implications. Due to the need for middleware clients on compute nodes to communicate with middleware services on service nodes, many middleware use case scenarios incur a certain latency and bandwidth penalty. Furthermore, central middleware services on service nodes represent a bottleneck as well as a single point of failure and control.

Reliability. In fact, the service node model for middleware is similar to the Beowulf cluster architecture, where a single head node controls a set of dedicated compute nodes. Similarly, middleware service offload, load balancing, and replication techniques may be used to alleviate performance and scalability issues and to eliminate single points of failure and control.

Slimming Down. The most intriguing aspect of modern HPC architectures is the deployment of lightweight OSs on compute nodes and the resulting limitations for middleware solutions. While the native communication system of the compute node OS can be used to perform RPC calls to service nodes in order to interact with middleware services, certain missing features, such as the absence of dynamic linking, are rather hard to replace.
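To give a rough sense of the scale behind the performance and scalability implications discussed above, consider a purely hypothetical back-of-the-envelope example (the numbers are illustrative, not measurements of any of the cited systems): if 10,000 compute nodes each issue 100 requests per second of 1 KB to a single middleware service, that service node must absorb roughly 10,000 x 100 x 1 KB ≈ 1 GB/s and one million requests per second. Partitioning the compute nodes across ten service nodes reduces the per-service-node load to roughly 100 MB/s and 100,000 requests per second, which is why the partitioning, offload, load balancing, and replication techniques mentioned above matter at scale.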


Service-Oriented Middleware Architecture. However, the shift toward the service node model for middleware also has certain architectural advantages. Middleware services may be placed on I/O nodes in order to facilitate advanced I/O-based online and/or real-time services, such as application steering and visualization. These services require I/O pipes directly to and from compute nodes. Data stream processing may be performed on compute nodes, on service nodes, and/or on external resources. System partitioning using multiple I/O nodes may even allow for parallel I/O data streams.
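As a hypothetical illustration of how partitioning can enable such parallel data streams (the modulo mapping and node naming below are assumptions, not taken from any of the cited systems), a compute node could select the I/O node responsible for its partition as follows:

    // Illustrative sketch only: a trivial static mapping of compute nodes to the
    // I/O node responsible for their partition, so that steering/visualization
    // streams from different partitions can flow in parallel.
    public final class PartitionIoRouter {
        private final String[] ioNodeHosts;      // one entry per partition's I/O node

        public PartitionIoRouter(String[] ioNodeHosts) {
            this.ioNodeHosts = ioNodeHosts;
        }

        // Pick the I/O node that should carry this compute node's data stream.
        public String ioNodeFor(int computeNodeRank) {
            return ioNodeHosts[computeNodeRank % ioNodeHosts.length];
        }
    }

With four I/O nodes, for example, ranks 0 through 3 would stream through four different I/O nodes, allowing their steering or visualization data to be carried and processed in parallel.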

5 Conclusion

This paper describes a recent trend in modern HPC system architectures toward lean compute node solutions, which aim at improving overall system performance and scalability by keeping the compute node software stack small and simple. We examined the impact of this trend on HPC middleware solutions and discussed the resulting paradigm shift in software architectures for middleware in modern HPC systems. We described the service node model for modern HPC middleware and discussed its software architecture, use cases, performance impact, scalability implications, and reliability issues. With this paper, we also try to engage the broader middleware research and development community beyond those who are already involved in porting and developing middleware solutions for modern HPC architectures. Based on many conversations with researchers, professors, and students, we realize that not many people in the parallel and distributed system research community are aware of this trend in modern HPC system architectures. It is our hope that this paper provides a starting point for a wider discussion on the role and architecture of middleware services and runtime environments in modern HPC systems.

References

1. Geist, G.A., Beguelin, A., Dongarra, J.J., Jiang, W., Manchek, R., Sunderam, V.S.: PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press, Cambridge, MA, USA (1994)
2. Snir, M., Otto, S., Huss-Lederman, S., Walker, D., Dongarra, J.: MPI: The Complete Reference. MIT Press, Cambridge, MA, USA (1996)
3. SciDAC Center for Component Technology for Terascale Simulation Software (CCTTSS): High-Performance Scientific Component Research: Accomplishments and Future Directions. Available at http://www.cca-forum.org/db/news/documentation/whitepaper05.pdf (2005)
4. Kesselman, C., Foster, I.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, San Francisco, CA, USA (1998)
5. Hendriks, E.: BProc: The Beowulf distributed process space. In: Proceedings of 16th ACM International Conference on Supercomputing (ICS) 2002, New York, NY, USA (2002) 129–136


6. Hsieh, J., Leng, T., Fang, Y.C.: OSCAR: A turnkey solution for cluster computing. Dell Power Solutions (2001) 138–140
7. Papadopoulos, P.M., Katz, M.J., Bruno, G.: NPACI Rocks: Tools and techniques for easily deploying manageable Linux clusters. In: Proceedings of IEEE International Conference on Cluster Computing (Cluster) 2001, Newport Beach, CA, USA (2001)
8. Becker, D., Monkman, B.: Scyld ClusterWare: An innovative architecture for maximizing return on investment in Linux clustering. Available at http://www.penguincomputing.com/hpcwhtppr (2006)
9. Morin, C., Lottiaux, R., Valle, G., Gallard, P., Utard, G., Badrinath, R., Rilling, L.: Kerrighed: A single system image cluster operating system for high performance computing. In: Lecture Notes in Computer Science: Proceedings of European Conference on Parallel Processing (Euro-Par) 2003. Volume 2790., Klagenfurt, Austria (2003) 1291–1294
10. Brightwell, R., Kelly, S.M., VanDyke, J.P.: Catamount software architecture with dual core extensions. In: Proceedings of 48th Cray User Group (CUG) Conference 2006, Lugano, Ticino, Switzerland (2006)
11. Moreira, J., Brutman, M., Castanos, J., Gooding, T., Inglett, T., Lieber, D., McCarthy, P., Mundy, M., Parker, J., Wallenfelt, B., Giampapa, M., Engelsiepen, T., Haskin, R.: Designing a highly-scalable operating system: The Blue Gene/L story. In: Proceedings of International Conference on High Performance Computing, Networking, Storage and Analysis (SC) 2006, Tampa, FL, USA (2006)
12. Buck, B.R., Hollingsworth, J.K.: An API for runtime code patching. Journal of High Performance Computing Applications (2000)
13. Kohl, J.A., Papadopoulos, P.M.: Efficient and flexible fault tolerance and migration of scientific simulations using CUMULVS. In: Proceedings of 2nd SIGMETRICS Symposium on Parallel and Distributed Tools (SPDT) 1998, Welches, OR, USA (1998)
14. Sterling, T.: Beowulf cluster computing with Linux. MIT Press, Cambridge, MA, USA (2002)
15. Sterling, T., Salmon, J., Becker, D.J., Savarese, D.F.: How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters. MIT Press, Cambridge, MA, USA (1999)
16. Cray Inc., Seattle, WA, USA: Cray XT4 Computing Platform Documentation. Available at http://www.cray.com/products/xt4 (2006)
17. Novell Inc.: SUSE Linux Enterprise Distribution. Available at http://www.novell.com/linux (2006)
18. Cluster File Systems, Inc., Boulder, CO, USA: Lustre Cluster File System. Available at http://www.lustre.org (2006)
19. Cluster File Systems, Inc., Boulder, CO, USA: Lustre Cluster File System Architecture Whitepaper. Available at http://www.lustre.org/docs/whitepaper.pdf (2006)

Usability Evaluation in Task Orientated Collaborative Environments

Florian Urmetzer and Vassil Alexandrov

ACET Centre, The University of Reading, Reading, RG6 6AY
{f.urmetzer, v.n.alexandrov}@reading.ac.uk

Abstract. An evaluation of usability is often neglected in the software development cycle, even though it has been shown in the past that a careful look at the usability of a software product has an impact on its adoption. With the recent emergence of software supporting collaboration between multiple users, it is obvious that usability testing will become more complex because of the multiplication of user interfaces and their physical distribution. The need for new usability testing methodologies and tools to support them under these circumstances is one consequence. This paper widens the methodologies of usability evaluation to computing systems supporting the solving of a collaborative work task. Additionally, details of a distributed screen recording tool are described that can be used to support usability evaluation in a collaborative context.

Keywords: collaboration, groupware, evaluation, usability, testing, HCI.

1 Introduction

Collaborations are well known to improve the outcome of projects in industry as well as in academia. The theory is based on the availability of specialist knowledge and workforce through different collaborators [1]. Software tools to support any form of collaboration are widely used in scientific as well as in business applications. These include tools to enhance communications between collaborators, such as Access Grid [2], or portals enabling text-based exchange of information [3]. Additionally, there are tools to support virtual organizations in their work tasks. These tools aim to enable two or more individuals to work in distributed locations towards one work task using one computing infrastructure. Examples are the Collaborative P-Grade portal [4] and distributed group drawing tools as described in [5]. The P-Grade portal enables distributed users to make contributions to one workflow at the same moment in time. Therefore users can actively change, build and enhance the same workflow in an informed fashion using the advantages of collaborative working models. Similarly, the distributed group drawing tool enables multiple users to draw and share the drawing via networks. In this paper, first the state of the art in usability evaluation methods will be looked at, detailing the current methods for usability testing. Then an example of a collaborative system will be given to provide an understanding of the terminology


collaborative software. The authors will argue that current methods have to be revised under the new collaborative software paradigm. Finally the authors will introduce a multiple screen recorder, which is currently under development. This recorder may help to enable usability evaluation in collaborative environments and therefore enable a revision of the current methodologies, namely observational studies, under the new collaborative software paradigm.

2 Methods for Usability Evaluation and Testing

Usability testing has been defined as the systematic way to observe real users handling a product and to gather data about the interaction with the product. The tests can determine whether the product is easy or difficult to use for the participants [6]. The aim of enquiries into usability is therefore to uncover problems concerning the usage of existing interfaces by users in realistic settings. The outcomes of these tests are suggestions to improve the usability and therefore recommendations to change the interface design [7].

Historically, usability tests have been conducted at the end of software projects to measure the usability of the project outcome and to improve the interface. One example is automatic user interface testing, where software uses an algorithm to evaluate a user interface against a programmed specification. These automatic methods have, however, been described as having scaling problems when evaluating complex and highly interactive interfaces [7]. In recent years usability testing has shifted to the user-centred paradigm and is therefore implemented more and more into the software design cycle. Testing involving target users, however, is still an important part of the enquiry [8], [9], [10].

Two examples are the use of design mock-ups and the use of walkthroughs. For example, Holzinger [11] used drawn paper mock-ups of interfaces to elicit information from potential users of a virtual campus interface. The researchers used the paper mock-ups to let users interact with the design in single and group interviews before the interface was programmed. They described the process and methods applied in the project as useful for identifying the exact look and feel of an interface and the users' usability requirements. A second example of a user-centric design approach is the usability walkthrough. Using focus groups, the walkthrough ensures a quality interface and the efficiency of the software production. This process includes users as well as product developers and usability experts in a group setting during the design of the software. This means that all attendees discuss the functionality and the design of the software before the programming has started [12], [13]. These two example methods for defining the design of interfaces are, however, stressed to be additions to tests with users using the interfaces after the programming was done [7].

In usability evaluations after the software is programmed, observational methods are most commonly chosen to determine the use of interfaces. For example, the researcher would observe chosen participants performing tasks while using the tested interface. The tasks should be related to the everyday jobs that need to be performed by the users when using the software, for example the question, ‘can


a user send e-mails using newly developed e-mail software?’ However, as Dumas [6] points out, the tasks to be performed should be a set of problem tasks, instead of tasks that are seen to be rather easy. This is a way of making sure that the interface is not merely verified, but critically analysed. While the user is performing the tasks provided, the tester would typically ask the participant to speak aloud what he is thinking during the execution of the task. This gives the tester an insight into the participant's thoughts and problems; for example, a participant would say ‘how do I send this e-mail now’, indicating that he is looking for the send button. This method is named think aloud [14], [16]. An extension of think aloud is to make two people work on one interface and communicate about their actions. This is seen as more natural than just talking to oneself [6].

A technology-supported extension of usability testing is remote usability testing. This is described as enabling the testing of participants in their natural surroundings, without the need to travel to the participants. The researchers in [16] described their study as involving a video conferencing system, a shared whiteboard and shared application software on the participants' side. The video conferencing software enables the observer on his side to see the image of the participant, and the shared application software enables him to see the participant's computer screen. The shared whiteboard is used to give instructions and tasks to the test participants. In a more recent publication, the researchers used remote usability testing with Microsoft NetMeeting to share the test participant's screen and a software tool called ‘snag-it’ to capture the shared content from the observer's computer screen. In this particular case, speaker phones on the participants' side and a tape recorder on the observer's side were used to capture think-aloud comments from the participant. The tasks to be performed during the test were given to the user in written form via postal mail. The major outcome of the study was the finding that remote usability testing is as accurate as face-to-face testing [17].

Fig. 1. A remote usability test, using observational methods, the researcher is observing the participant using the software via the internet and a phone network. The phone is used for think aloud protocols and the participants screen is shared using screen sharing software.


2.1 The Collaborative Challenge

With CoWebs (collaborative web spaces, e.g. wikis) it is possible to move away from a text-only medium and to integrate collaborative multimedia allowing the group editing of, for example, formulas [18]. Supporting the CoWeb theory is the advent of virtual organisations in the Grid paradigm. Individuals can form virtual collaborations and therefore share information and resources by electronic means across organisations [19]. An example of a tool produced under these paradigms is the Collaborative P-Grade Portal [4]. This portal enables multiple users to make contributions to one workflow at the same moment in time. This is possible through a central server, where every participant in the collaboration logs in and receives an editable representation of the workflow. Each workflow job is lockable by only one user in the collaboration. Through the locking, the job becomes editable for that user and is shown as locked to the other users (see Fig. 2). This mechanism allows users to actively change, build and enhance the same workflow while being in different geographic locations.

Fig. 2. Example of the P-Grade Portals central server sharing a workflow. The locks can be seen in the interfaces indicating that the nodes are not editable.
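A minimal sketch of such a locking scheme (illustrative only; this is not P-Grade portal code, and the class and method names are hypothetical) could keep a central table mapping each workflow job to the user currently holding its lock:

    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only: a central lock table in the style described
    // above, where a workflow job may be locked by exactly one user at a time
    // and everyone else sees it as read-only until it is released.
    public final class WorkflowLockTable {
        private final ConcurrentHashMap<String, String> lockOwnerByJobId = new ConcurrentHashMap<>();

        // Try to lock a job for a user; returns true if the lock was acquired.
        public boolean lock(String jobId, String userId) {
            return lockOwnerByJobId.putIfAbsent(jobId, userId) == null;
        }

        // Release only if the caller actually holds the lock.
        public boolean unlock(String jobId, String userId) {
            return lockOwnerByJobId.remove(jobId, userId);
        }

        // Used by the clients to render the job as locked or editable.
        public String ownerOf(String jobId) {
            return lockOwnerByJobId.get(jobId);
        }
    }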

This shift from single-user applications to collaborative applications needs to be reflected in the methodology of usability testing. At present it is not possible to run any form of usability test in these collaborative settings. However, it would be highly interesting to report such findings and therefore to gather more detailed information on the use and requirements of such systems.


3 Enabling Usability Evaluation in Collaborative Environments

The discussion above has shown that there are well-developed methods to enable usability evaluation when single interfaces are used. However, there is a need for tools to investigate multiple interfaces. This will have an important impact on the ability to conduct user interface evaluation and therefore to uncover problems concerning the usage of existing interfaces in realistic settings when using collaborative software. The researchers of this paper have therefore proposed software allowing distributed usability evaluation using observational methods. The software allows the recording of chosen screen content and cameras, including the sound, of clients involved in a collaborative task. The clients' screen content is captured and transferred over the network to the recording server, where the video is stored to disc.

Fig. 3. Test computers are capturing screens and cameras and transfer the images and audio to a recording server. From there the usability tester can playback the recorded material.

The capturing of screens has been realized using the Java Media Framework. The area of the screen to be captured can be defined using x and y locations on the screen. Therefore not all of the screen has to be captured; only the part of the screen showing the application studied by the researchers is captured. Before the video is transferred, a timestamp is added, so that it is possible to synchronize the replay of the different clients at a later stage. The transport from the client to the server is organized using RTP (Real-time Transport Protocol). Finally, the video is written to disc on the server. A researcher interested in analyzing the gathered material can then play it back from the server, which transfers the video, synchronized, to a single client.
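For illustration only, the following sketch shows how a rectangular screen region defined by x and y locations can be captured and tagged with a timestamp using the standard java.awt.Robot class; the actual tool uses the Java Media Framework and streams the frames via RTP to the recording server rather than writing a local file, and the region coordinates and file name below are invented.

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Illustrative sketch only: grab a sub-region of the screen and tag it with
    // a capture timestamp, a stand-in for the JMF/RTP pipeline described above.
    public final class RegionCapture {
        public static void main(String[] args) throws Exception {
            // Hypothetical region: only the application window under study.
            Rectangle region = new Rectangle(100, 100, 800, 600);   // x, y, width, height
            Robot robot = new Robot();

            BufferedImage frame = robot.createScreenCapture(region);
            long captureTimeMillis = System.currentTimeMillis();    // used later to sync clients

            ImageIO.write(frame, "png", new File("frame-" + captureTimeMillis + ".png"));
        }
    }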

4 Future Directions and Conclusion The future tasks in the project will be grouped into three areas. First more functionality will be added to the user interfaces of the clients as well as of the researchers interface. On the client side a choice of whether a camera and/or


audio is shared has to be added in the graphical user interface. These selections then have to be fed back to the user in a smaller window on the screen to enable the user to gain more control over the shared material. Additionally, an option has to be created where the user can choose the size of the transferred video. This will prevent the overloading of networks and of the participant's personal computer. The researcher has to have the option to choose the video feeds during playback and to see the video feeds during recording. This should enable the researcher to intervene in the test when needed.

Second, it should be possible to add a tag to the recorded material. Tagging is the adding of information to the recorded material, which can be in text form as well as in graphical form. This should enable scientists to describe moments in time and code the material for further analysis. This tool is seen as important by the researchers, given that during the analysis it will be hard to keep track of where usability problems occur, because the attention of the researcher has to be multiplied by the number of participants. Similar tagging mechanisms have been used in meeting replays [20].

Finally, a messenger structure to allow textual communication between the tester and the participant has to be added. This may be only text based; however, it could also be voice based or even video based, although intensive testing of network overload and other issues has to be done beforehand.

In conclusion, the researchers have shown in a proof of concept [21] that it is technically possible to enable the recording of multiple screens and cameras on one server. This proof of concept has now been programmed as a single-server, multiple-client application. This application will enable researchers to conduct observational studies in distributed collaborative computing environments and therefore to evaluate user interfaces. The current methods used for such interface evaluations have been described in this paper in detail, and the new development is described as moving from the classical observational study of one user to the observation of groups of users and their screens. At this point it is taken as proven that the observation of users via networked screen and camera recording is possible; however, tests are being conducted at present to see how effective the new methodologies are from a human factors point of view.

References 1. Dyer, J.H., Collaborative advantage: winning through extended enterprise supplier networks. 2000, Oxford: Oxford University Press. xii, 209. 2. AccessGrid. Access Grid: a virtual community. [Internet] 2003 [cited 2006 13 March 2003]; http://www-fp.mcs.anl.gov/fl/accessgrid/]. 3. Klobas, J.E. and A. Beesley, Wikis: tools for information work and collaboration. Chandos information professional series. 2006, Oxford: Chandos. xxi, 229 p. 4. Lewis, G.J., et al., The Collaborative P-GRADE Grid Portal. 2005. 5. Saul, G., et al., Human and technical factors of distributed group drawing tools. Interact. Comput., 1992. 4(3): p. 364-392. 6. Dumas, J.S. and J. Redish, A practical guide to usability testing. Rev. ed. 1999, Exeter: Intellect Books. xxii, 404. 7. Mack, R.L. and J. Nielsen, Usability inspection methods. 1994, New York; Chichester: Wiley. xxiv, 413.


8. Carroll, C., et al., Involving users in the design and usability evaluation of a clinical decision support system. Computer Methods and Programs in Biomedicine, 2002. 69(2): p. 123. 9. Nielsen, J., Usability engineering. 1993, Boston; London: Academic Press. xiv, 358. 10. Faulkner, X., Usability engineering. 2000, Basingstoke: Macmillan. xii, 244. 11. Holzinger, A., Rapid prototyping for a virtual medical campus interface. Software, IEEE, 2004. 21(1): p. 92. 12. Bias, R., The Pluralistic Usability Wlakthrough: Coordinated Emphaties, in Usability Inspection Methods, J. Nielsen and R.L. Mack, Editors. 1994, Wiley & Sons, Inc: New York. 13. Urmetzer, F., M. Baker, and V. Alexandrov. Research Methods for Eliciting e-Research User Requirements. in Proceedings of the UK e-Science All Hands Meeting 2006. 2006. Nottingham UK: National e-Science Centre. 14. Thompson, K.E., E.P. Rozanski, and A.R. Haake. Here, there, anywhere: remote usability testing that works. in Proceedings of the 5th conference on Information technology education table of contents. 2004. Salt Lake City, UT, USA: ACM Press, New York, NY, USA. 15. Monty, H., W. Paul, and N. Nandini, Remote usability testing. interactions, 1994. 1(3): p. 21-25. 16. Thompson, K.E., E.P. Rozanski, and A.R. Haake, Here, there, anywhere: remote usability testing that works, in Proceedings of the 5th conference on Information technology education. 2004, ACM Press: Salt Lake City, UT, USA. 17. Dieberger, A. and M. Guzdial, CoWeb - Experiences with Collaborative Web Spaces, in From Usenet to CoWebs: interacting with social information spaces, C. Lueg and D. Fisher, Editors. 2003, Springer: London. p. x, 262 p. 18. Foster, I., C. Kesselman, and S. Tuecke, The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal on Supercomputing Applications, 2001. 15(3). 19. Buckingham Shum, S., et al. Memetic: From Meeting Memory to Virtual Ethnography & Distributed Video Analysis. in Proc. 2 nd International Conference on e-Social Science. 2006. Manchester, UK: www.memetic-vre.net/publications/ICeSS2006_Memetic.pdf. 20. Urmetzer, F., et al. Testing Grid Software: The development of a distributed screen recorder to enable front end and usability testing. in CHEP 06. 2006. India, Mumbai.

Developing Motivating Collaborative Learning Through Participatory Simulations

Gustavo Zurita 1, Nelson Baloian 2, Felipe Baytelman 1, and Antonio Farias 1

1 Management Control and Information Systems Department - Business School, Universidad de Chile
[email protected], [email protected], [email protected]
2 Computer Science Department - Engineering School, Universidad de Chile
[email protected]

Abstract. Participatory simulations are collaborative group learning activities whose goals are to improve teaching and learning, increasing motivation inside the classroom by engaging the learner in games that simulate a certain system they have to learn about. The approach has already been applied to students at primary and secondary educational levels; however, there are still no experiences reported with higher-level students, although there are many learning subjects to which this technique can be applied. This paper presents the implementation of a framework-like tool for supporting learning activities in a business school with undergraduate students using mobile devices over an ad-hoc network.

Keywords: Handhelds. Collaborative Learning. Participatory Simulation. Gestures. Sketches. Freehand-input based. Learning and Motivation.

1 Introduction

Any experienced teacher knows that without the proper motivation for students to engage in a learning experience, the otherwise best-designed experiences will be unsuccessful. Dick and Carey [8] state: “Many instructors consider the motivation level of learners the most important factor in successful instruction”. “Motivation is not only important because it is a necessary causal factor of learning, but because it mediates learning and is a consequence of learning as well” [20]. In other words, students who are motivated to learn will have greater success than those who are not. Participatory Simulation aims for students having “rich conceptual resources for reasoning about and thoughtfully acting in playful and motivational spaces, and thus can more easily become highly engaged in the subject matter” [11]. It uses the availability of mobile computing devices to give each student the capability of simple data exchanges among neighboring devices [19], [4]. These devices enable students to act as agents in simulations in which overall patterns emerge from local decisions and information exchanges. Such simulations enable students to model and learn about several types of phenomena [4], including those related to economics [4], [9]. Some research groups have implemented collaborative learning participatory simulations with handhelds and infrared beaming [16], and it has been found that these kinds of activities


provide various advantages for teaching and learning: (a) they introduce an effective instructional tool and have the potential to impact student learning positively across curricular topics and instructional activities [18], (b) they increase motivation [12], [4], and (c) they generate positive effects in engagement, self-directed learning and problem-solving [12]. Although a handheld's most natural data-entry mode is the stylus, most currently available handheld applications adopt the PC application approach that uses widgets instead of freehand-input-based paradigms (via touch screens) and/or sketching [6]. This paper presents a tool for implementing collaborative learning participatory simulations, with two general research goals: (a) to propose a conceptual framework for specifying and developing participatory simulation applications, and (b) to determine the feasibility of using it in undergraduate curricular contexts, both in terms of intended and actualized learning outcomes, particularly in the management area. An instance of the framework is described. Its implementation is simple, lightweight and fully based on wirelessly interconnected handhelds forming an ad-hoc network.

2 Related Work A learning participatory simulation is a role-playing activity that helps to explain the coherence of complex and dynamic systems. The system maps a problem of the real world to a model with a fixed number of roles and rules. Global knowledge and patterns emerge in Participatory Simulations from local interactions among users and making decisions to understand the impact by an analysis and observation while doing and/or at the end of the activity. Researchers are highly interested in collaborative learning participatory simulations due to these simulations appear to make very difficult ideas around ‘distributed systems’ and ‘emergent behavior’ more accessible to students [19] motivating its learning process in a playful social space [4]. Various systems using different hardware devices have been already implemented: • A futures trading simulation described on [2] enhances the learning process of concepts such as price discovery, the open outcry trading method, trading strategies of locals and brokers, and the impact of interest rates on the treasury bond futures contract. • Thinking Tags [1] uses small nametag sized computers that communicate with each other. It was used to tech high-school students in a simulation of virus propagation and asked them to determine the rules of the propagation [5]. • NetLogo [17] is an environment for developing of learning participatory simulations for PCs. Simulations can be re-played, analyzed and compared with previous ones. An extension called HubNet [19] supports PCs and mobile devices for input and output. • Klopfer et al. [12] showed that the newer and more easily distributable version of Participatory Simulations on handhelds was equally as capable as the original Tag-based simulations in engaging students collaboratively in a complex problem-solving task. They feel that handhelds technology holds great promise for promoting collaborative learning as teachers struggle to find authentic ways to integrate technology into the classroom in addition to engaging and motivating students to learn science.


• A collaborative learning participatory simulation in the form of a stock exchange was designed for master's students in financial theory, using architectures based on a server and clients running on desktop PCs or laptops as well as on PDAs [13].
• The SimCafé experiments belong to the sociological approach, aiming at validating and consolidating models [9], [4]. In this approach, participants are stakeholders and the witnesses of the emergence are domain experts, usually social scientists.

Based on the literature mentioned above, we have identified that no system has yet been proposed or implemented for handhelds in a wireless ad-hoc network using a pen-based interface as the main metaphor for user interaction.

3 Developing a Framework

In accordance with [7] and [20], some factors, based on empirical evidence, that enhance motivation are:

• Involve the learner. Learners must be involved in the ownership of the goals, strategies and assessment of that with which they are to be engaged. The students must feel that they are in control of their own learning.
• Responding positively to questions posed by students can enhance intrinsic motivation. Furthermore, consideration should be given to what the learner brings to the classroom: experiences, interests, and other resources.
• Options and choices about the learning environment and the various curriculum components (persons, time, content, methods, etc.) must be available.
• Simulating the reality. Whatever the expected learning outcomes, there must be a direct connection with the real world outside the classroom. The shifting of responsibility for learning from the teacher to the student is fundamental to both content fulfillment and learner motivation.
• Feedback and reinforcement are essential to good teaching and effective learning. When learners are given positive reinforcement, a source of motivation is tapped. Evaluation should be based on the task, rather than comparison with the performance of other students.
• Collaboration among learners is a very potent way in which an individual learner forms an interpretation of the environment and develops understanding in motivational spaces of social interactions.

Collaborative learning applications based on Participative Simulations are able to meet the requirements listed above. In order to generate, design and implement them the Teacher must define learning goals, artifacts to be exchanged, behavior variables and parameters, and rules and roles for playing the simulation. In order to configure the system for a collaborative learning participatory simulation, the Teacher may setup transferable objects, their behavior parameters, rules and participant roles. Then, the teacher explains the goal of the activity to the students, also describing objects, rules and roles, and how these concepts are represented in their handhelds. Rules, roles and goals should be designed to achieve a high social interaction between students, negotiation instances, and competition to encourage an active and motivated stance [13]. A startup process must ensure students will play an active and dynamic


role. This should be based on defining trading activities between students including Negotiation and Exchange of Objects which is supported by handhelds. These conditions depend on each particular application and may involve the following aspects: (a) type of exchange objects, (b) exchange amounts, (c) trade conditions, (d) parameters before and after the exchange, and (e) exchange objects. If students require assistance, our framework allows the teacher to wirelessly give them feedback and assessment. The teacher can (a) observe the simulation state of each participant device and (b) modify such state in order to solve the student inquiry.

Fig. 1. Conceptual framework

Once the simulation is done, the teacher must guide students’ understanding about the activity. In this way, the students will construct the learning objective together, analyzing different stages of the activity.

4 Applications Implemented Using the Framework

We have implemented a lightweight platform for supporting the implementation of participatory simulation applications based on the framework proposed in Section 3. Using this platform we have successfully implemented two applications for the scenarios proposed in the previous section. The platform is a collection of Java classes which can be extended to implement the desired scenario in a very fast and easy way. They allow the definition of new roles, new products and the rules which will govern the simulation. The platform also offers implementations of interfaces for assigning roles and exchanging goods, which should be extended to implement the details of the desired application. It also implements all the necessary actions to discover other applications of the same class running on handhelds within the ad-hoc network and opens the necessary communication channels to exchange data between them.

4.1 Trust Building Simulation

This application aims to support the learning of concepts like reputation and trust by undergraduate students of business schools. In the simulated situation the roles of


vendors and customers are implemented. Customers are required to maintain a certain basket of products, which they acquire from vendors. Products have a certain lifespan which is known at the moment of purchase. Customers have to replace them when they expire. The simulation assigns a random lifetime to the products around the expected one. If the product fails before the lifetime offered by the vendor, customers may claim a money refund or a product replacement. Vendors can advertise their products freely, offering a longer lifetime to attract customers or a shorter one to gain customers' trust. They may refuse to refund the money or replace the product in order to make a better profit. In the real world, the customers' trust in companies is built by repeated interaction. A positive evaluation is usually generated when product quality is satisfactory or, even, when the company reacts appropriately after a client's complaint about bad products (or services). When the simulation finishes, students must analyze these results and draw conclusions about how clients' trust impacts the companies' profit.
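As a purely hypothetical sketch of what extending the platform's Java classes for this trust-building scenario could look like (the paper does not publish the platform's actual API, so the types Good, TradeRule and VendorsOnlySellRule below are invented for illustration), a scenario developer might define a tradable good and a rule restricting who may sell it:

    // Illustrative sketch only: hypothetical scenario classes, not platform code.
    final class Good {
        final String name;
        final double price;
        final int expectedLifeDays;
        Good(String name, double price, int expectedLifeDays) {
            this.name = name;
            this.price = price;
            this.expectedLifeDays = expectedLifeDays;
        }
    }

    // A rule deciding whether a proposed exchange between two participants is allowed.
    interface TradeRule {
        boolean allows(String sellerRole, String buyerRole, Good good, double offeredPrice);
    }

    // Example rule for the trust-building scenario: only vendors may sell to customers.
    final class VendorsOnlySellRule implements TradeRule {
        public boolean allows(String sellerRole, String buyerRole, Good good, double offeredPrice) {
            return "vendor".equals(sellerRole) && "customer".equals(buyerRole) && offeredPrice >= 0;
        }
    }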

Fig. 2. a) Teacher drags a student icon into the “vendor” area to assign him a new role. b) Teacher can create new goods using free-hand drawings.

Setup phase: To assign roles to the students (customer or vendor) the teacher uses the “activity administration” mode. Students without roles are displayed in the middle of the screen over a white area. The right area of the handheld (Figure 2.a) holds the “vendors” and the left belongs to the “consumers”. The teacher assigns roles by drag-and-dropping the icon of a student into the desired area. Since in this application goods may be anything, they are created by the teacher by drawing a sketch and surrounding it with a rectangle. This produces a “goods icon”, displaying an awareness of successful creation and showing a reduced icon of the original sketch at the bottom of the screen. Additional “goods icons” may then be created, as seen in Figure 2.b. Double-clicking on a “goods icon” opens a window for defining default variables for that type of good. In this application, instance variables are “original price”, “production time” and “product expected life”. Once goods have been created, their icons show up in “activity administration” mode. The teacher assigns goods to participants by dragging the goods icons over the vendor icons to allow them to produce the item, or over consumer icons to ask them to acquire the item.


Simulation phase: The simulation starts with vendors offering their products verbally and customers looking for the most convenient offer in terms of price and lifetime. Once a customer faces a vendor, they face their handhelds in order to activate the IrDA communication. This enables the customer to receive information about the vendor's reputation and allows customer and vendor to make the transaction. Figure 3 shows the three steps required to complete the transaction. When the two handhelds face each other, the top region of the screen is converted into the negotiation area. The vendor drags the product to this area, and it appears in the buyer's negotiation area; the buyer accepts it by dragging it to the area of owned products. The clients keep information on the reputation of each vendor as a number ranking the vendors. At the beginning it has no value. The client can set and alter this number afterwards according to the interaction they had with the vendor and also by asking other customers about the opinion they have of a vendor.

Fig. 3. Three steps in the trade process. The vendor offers a product by dragging the object, a customer accepts it, the vendor’s stock and customer requirements/acquired lists get updated.
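The following sketch is one possible way to implement the random product lifetime and the customer-side reputation number described above; the noise model and the update rule are assumptions made here for illustration, not the formulas used in the actual application.

    import java.util.Random;

    // Illustrative sketch only: draw an actual lifetime around the advertised one
    // and update a customer's numeric trust in a vendor after the product is used.
    public final class TrustDemo {
        private static final Random RNG = new Random();

        // Actual lifetime: advertised value plus +/-20% uniform noise, never negative.
        static double actualLifetime(double advertisedDays) {
            double noise = (RNG.nextDouble() - 0.5) * 0.4;          // in [-0.2, +0.2]
            return Math.max(0.0, advertisedDays * (1.0 + noise));
        }

        // Customer-side trust update: broken promises cost more than kept ones gain.
        static double updateTrust(double currentTrust, double advertisedDays, double observedDays) {
            double delta = (observedDays >= advertisedDays) ? +0.1 : -0.2;
            return Math.min(1.0, Math.max(0.0, currentTrust + delta));
        }
    }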

4.2 Stock Market Simulation

This application is about learning how supply and demand are impacted by expectations and speculation. This is learnt by simulating a stock market situation. The only role present in this simulation is that of the investor, who has to buy and sell shares trying to make a profit. The teacher takes part in the simulation by introducing changes in the scenario, varying the overall company share prices. She can also participate by offering and buying shares in order to create unexpected situations simulating speculation. After the simulation, students and teacher can analyze the reactions of the simulated market.


Setup phase: In this scenario there is no role assignment action since all participants have the same role. The goods are now the shares of different companies which the investors can buy and sell. The teacher creates the different shares in the same way as in the previous application. Every investor receives an initial amount of shares and money.

Simulation phase: The simulation starts by letting the investors offer and buy their shares. Figure 4 a) shows the interface of the application. Students can see the amount of shares and their value, and a small graph with the history of the values via a pull-down menu. Shares are traded using IrDA detection. Figure 4 b) shows the three steps necessary to transfer shares among students when they agree on the price and amount of the shares to be traded. When the buyer's and seller's handhelds face each other, the region at the top of the screen is converted into the trade area. The seller drags the object representing the share to this area, triggering a dialog at the buyer's handheld in order to enter the amount of shares and the money. Then the data of both is updated accordingly.

Fig. 4. a) The student’s interface. b) The selling buying sequence.

5 Discussion and Future Work

First results of this ongoing work have shown us that mobile technology is a suitable approach for implementing participatory simulations. In fact, one of the most motivating factors of this kind of learning activity is the face-to-face interaction students can have with each other. Technology plays a very subtle yet important role, letting the social interaction be at the center of the experience. On the other hand, we found that the platform is really a helpful tool for supporting the development of applications implementing participatory simulations and other games that are based on the exchange of artifacts between the participants. The development time required for subsequent applications can be reduced to less than a third of the original time. We believe that the most significant contribution of the work reported here is to provide a conceptual framework for applications of collaborative learning


participatory simulations, which is easy to adapt to many kinds of subject-matter content and to undergraduate curricular integration, and which encourages the adoption of learner-centered strategies. The teachers who pre-evaluated the application suggest that the same technologies and ideas could be used across many subject-matter areas. The design of effective learning environments in our conceptual framework has included (a) a learner-centered environment (learners construct their own meanings), (b) a knowledge-centered environment (learners connect information into coherent wholes and embed information in a context), (c) an assessment-centered environment (learners use formative and summative assessment strategies and feedback), and (d) community-centered environments (learners work within collaborative learning norms). The next phase of our investigations will develop and explore more subject-specific applications and learning and motivational measures at the student level. We are also working on developing an application which lets the teacher define a participatory simulation application without having to program a single line, only defining the roles, products and rules for exchanging products. In the current platform, a language for defining these rules which could be used to generate the application is missing.

Acknowledgments. This paper was funded by Fondecyt 1050601.

References 1. Andrews, G., MacKinnon, K.A. Yoon, S.: Using ”Thinking Tags” with Kindergarten Children: A Dental Health Simulation, Journal of Computer Assisted Learning, 19 (2), (2003), 209–219. 2. Alonzi, P., Lange, D., Betty, S.: An Innovative Approach in Teaching Futures: A Participatory Futures Trading Simulation, Financial Practice and Education, 10(1), (2000), 228-238. 3. Castro, B., Weingarten, K.: Towards experimental economics, J. of Political Economy, 78, (1970), 598–607. 4. Colella, V.: Participatory simulations: Building collaborative understanding through immersive dynamic modeling”. The Journal of the Learning Sciences 9, 2000, pp. 471–500. 5. Colella, V., Borovoy, R., Resnick, M.: Participatory simulations: Using computational objects to learn about dynamic Systems, Conf. on Human Factors in Computing Systems, (1998), 9 – 10. 6. Dai, G., Wang, H.: Physical Object Icons Buttons Gesture (PIBG): A new Interaction Paradigm with Pen, Proceedings of CSCWD 2004, LNCS 3168, (2005), 11-20. 7. Dev, P. (1997). Intrinsic motivation and academic achievement. Remedial & Especial Education. 18(1) 8. Dick, W., & Carey, L. (1996). The systematic design of instruction (4th ed.). New York: Longman. 9. Guyot, P., Drogoul, A.: Designing multi-agent based participatory simulations, Proccedings of 5th Workshop on Aget Based Simulations, (2004), 32-37. 10. Hinckley, K., Baudisch, P., Ramos, G., Guimbretiere, F.: Design and Analysis of Delimiters for Selection-Action Pen Gesture Phrases in Scriboli, Proceeding of CHI 2005, ACM, (2005), 451-460. 11. Klopfer, E., Yoon, S., Perry, J: Using Palm Technology in Participatory Simulations of Complex Systems: A New Take on Ubiquitous and Accessible Mobile Computing, Journal of Science Education and Technology, 14(3), (2005), 285-297. 12. Klopfer, E., Yoon, S., Rivas, L.: Comparative analysis of Palm and wearable computers for Participatory Simulations, Journal of Computer Assisted Learning, 20, (2004), 347–359.


13. Kopf, S., Scheele, N. Winschel, L., Effelsberg, W.: Improving Activity and Motivation of Students with Innovative Teaching and Learning Technologies, International Conference on Methods and Technologies for Learning (ICMTL), WIT press, (2005), 551 – 556. 14. Landay, J., Myers, B.: Sketching interfaces: Toward more human interface design, IEEE Computer, 34(3), (2001), 56-64 15. Long, A., Landay, J., Rowe, L.: PDA and gesture Use in Practice: Insights for Designers of Penbased User Interfaces, Retrieved on 2006, December, from http:// bmrc.berkeley.edu/ research/publications/1997/142/clong.html 16. Soloway, E., Norris, C., Blumenfeld, P., Fishman, R., Marx, R:: Devices are Ready-at-Hand, Communications of the ACM, 44(6), (2001), 15–20 17. Tisue, S., Wilensky, U.: NetLogo: A simple environment for modeling complexity, International Conference on Complex Systems, (2004). 18. Vahey, P., Crawford, V.: Palm Education Pioneers Program: Final Evaluation Report, SRI International, Menlo Park, CA, (2002). 19. Wilensky, U., Stroup, W.: Learning through participatory simulations: Network-based design for systems learning in Classrooms, Proceedings of CSCL’99, Mahwah, NJ, (1999), 667-676. 20. Wlodkowski, R. J. (1985). Enhancing adult motivation to learn. San Francisco: Jossey-Bass.

A Novel Secure Interoperation System

Li Jin and Zhengding Lu

Department of Computer Science & Technology, Huazhong University of Science & Technology, Wuhan 430074, China
[email protected]

Abstract. Secure interoperation for a distributed system, such as a multidomain system, is a complex and challenging task. The reason for this is that more and more abnormal requests and hidden intrusions make static access control policies fragile. The access control model presented in this paper approaches this problem by proposing the concept of a “trust-level”, which involves a statistical learning algorithm, an adaptive calculating algorithm and a self-protecting mechanism to evaluate a dynamic trust degree and realize secure interoperations.

Keywords: Secure Interoperation; Trust-level; Adaptive.

1 Introduction

Traditional access control systems are defined as granting or denying requests to restrict access to sensitive resources; their main purpose is to protect the system from ordinary mistakes or known intrusions. The development of network and distributed technology has caused a large revolution in the information security area. Sharing information without sacrificing privacy and security has become an urgent need. The emergence of Trust Management has promised a novel way to solve these problems. Many researchers have introduced the idea of “trust” to improve a system's security degree. It usually involves a system's risk evaluation or a user's historical reputation to decide whether they are “trusted” or not. This is insufficient to deal with such unexpected situations as dishonest network activities, identity spoofing or authentication risk. Many researchers have amply discussed the importance of “trust” in a dynamic access control system and reached many achievements. However, how to turn the abstract concept of “trust” into a numeric value has been insufficiently discussed. The motivation of this paper is drawn from this idea. To solve these problems, we introduce a new quantitative concept, the “trust-level”, into access control policies and develop a novel Adaptive Secure Interoperation System using Trust-Level (ASITL). In Section 2, we discuss some related achievements in secure interoperation and trust management in recent years. We describe the whole architecture and working flow of the ASITL in Section 3. The trust evaluation module is discussed in Section 4. We also present an interesting example in Section 5. Concluding remarks are given in Section 6.


2 Related Works

Several research efforts have been devoted to trust strategies in secure interoperation and trust management. Ninghui Li and John C. Mitchell proposed RT [1], which combines the strengths of role-based access control and trust-management systems; RT has been developed into a systematic theory. Trust services such as trust establishment, negotiation, agreement and fulfillment were reported in [2], [3]. Although these works identify many security factors that might influence the trust degree, they do not propose a formalized metric to quantify it. Furthermore, Elisa B. et al. discussed secure knowledge management, focusing on confidentiality, trust, and privacy [4], and Kilho S. et al. presented a concrete protocol for anonymous access control that supports compliance to the distributed trust management model [5]; both represent novel achievements in this area. However, the trust evaluation methods themselves have not been essentially developed. Compared with the above research efforts, our framework leads to a number of advantages in a secure interoperation environment.
• A dishonest user cannot deceive the authentication server to reach his true intention. Once a dishonest network activity is detected, the user's trust-level is decreased and he is not granted any further privileges. Therefore, many potential risks can be efficiently avoided.
• To gain a higher trust-level, a user seeking any advanced service has to submit correct certificates and obey the rules all the time.
• With the statistical learning algorithm, even a new intrusion or an unknown event can be learned and added to the abnormal events DB.

3 Framework for ASITL

ASITL is a self-adaptive secure interoperation system. Different from traditional authentication systems, it involves a dynamic trust evaluation mechanism. It consists of three main parts: the certificate authentication mechanism, the authentication server and the trust evaluation module. Each part has its own duty as follows:
Certificate authentication mechanism. It verifies the validity of certificates against the certificates DB.
Authentication server. Using the certificate authentication mechanism, the access control policies and, if necessary, the trust evaluation module, it decides whether or not to grant a request of the current user.
Trust evaluation module. This module includes two parts: the abnormal judging mechanism and the trust-level calculating algorithm. The abnormal judging mechanism involves a self-adaptive statistical learning algorithm which uses a probabilistic method to assign a class to an abnormal event. The trust-level calculating mechanism defines a mathematical model to calculate a trust-level value from an abnormal event's kind and its occurrence number.


With the above three main parts, we can describe a typical secure session in ASITL. Firstly, a user sends a request to the authentication server, and according to the user's request and the access control policies, the authentication server asks for the necessary certificates. Secondly, the user submits the needed certificates. If the certificates satisfy the policies, the user's request is forwarded to the corresponding application server and the secure interoperation is finished. Otherwise, the authentication server sends further authentication requirements to the user and the trust evaluation module starts to work at once. Thirdly, the user has to send other certificates to further prove his identity, and the authentication server continues to authenticate the current user while constantly updating the user's trust-level. Finally, when the user's trust-level falls below the system's threshold value, the current session is canceled and the user is banned from the system for a predefined time-out period. Otherwise, the user has to continue submitting certificates to verify his identity until his certificates satisfy the request and the access control policies.

4 Trust Evaluation

To maintain consistency and simplicity, the authentication server generates a user ID and maintains a history record for each user. Generally, a trust observation period is a fixed interval of time, i.e., a threshold defined by the system, or the period between two audit-related events.

4.1 Basic Definitions

Each network event has its own features. In the network security area, researchers often abstractly divide a network event into many significant features. Similarly, common kinds of features can be grouped into classes, which might be related to certain kinds of network events. Before describing the trust evaluation process in detail, we give the basic definitions and the theorem.

Definition 1: Every network event contains a set of intrinsic features. When we analyze a network event, some of these features are essential and some are irrelevant. We call the essential ones key features, or simply features.

Definition 2: A feature can be assigned to one or more topic kinds, named classes, which are associated with different kinds of network events.

Theorem 1: Supposing that
1. an event E in the abnormal events DB can be described by a feature set F;
2. all features f ∈ F are mutually exclusive and are associated with one or more of a set of classes C_k;
3. a suspicious event E_i is observed through a feature set F_J = {f_1, f_2, ..., f_j, ..., f_J};

Then the index I of the most probable event E_i is given by

$I = \arg\max_i \sum_{j=1}^{J} \left( \log p(C_{f(j)} \mid E_i) - \log p(C_{f(j)}) \right)$   (1)

where p(X) denotes the probability of event X and C_{f(j)} is the class to which feature f_j is assigned.

4.2 Working Flow of Abnormal Judging

With the above definitions and theorem, we realize the self-adaptability of the abnormal judging mechanism as follows:
Step 1: Initialize the events training set by extracting general features from a large number of abnormal events and deduce some basic rules from the current abnormal feature set.
Step 2: Receive an abnormal event that needs to be classified.
Step 3: Extract its features and send them to the event induction module. If the event can be assigned to a known class, its abnormal kind is passed on to the trust-level calculating module. Otherwise, the unknown features are sent back to the training set and the current feature rules are updated for the next judging process.
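To make the classification step concrete, the following sketch evaluates formula (1) for one observed feature set. The sizes, probability tables and all names are invented for illustration; in ASITL these probabilities would be estimated from the events training set described above.

```c
/* Sketch of the abnormal-event classification step (Eq. 1).
 * The toy probability tables are hypothetical placeholders. */
#include <math.h>
#include <stdio.h>

#define NUM_EVENTS  3   /* known abnormal event kinds E_i */
#define NUM_CLASSES 4   /* feature classes C_k            */

/* p_class_given_event[i][k] = p(C_k | E_i), p_class[k] = p(C_k) */
static const double p_class_given_event[NUM_EVENTS][NUM_CLASSES] = {
    {0.70, 0.10, 0.10, 0.10},
    {0.10, 0.60, 0.20, 0.10},
    {0.05, 0.15, 0.40, 0.40},
};
static const double p_class[NUM_CLASSES] = {0.30, 0.30, 0.25, 0.15};

/* classes[j] is the class assigned to observed feature f_j */
int classify(const int *classes, int num_features)
{
    int best = 0;
    double best_score = -HUGE_VAL;

    for (int i = 0; i < NUM_EVENTS; i++) {
        double score = 0.0;
        for (int j = 0; j < num_features; j++) {
            int k = classes[j];
            score += log(p_class_given_event[i][k]) - log(p_class[k]);
        }
        if (score > best_score) {
            best_score = score;
            best = i;             /* index I of the most probable event */
        }
    }
    return best;
}

int main(void)
{
    int observed[] = {1, 1, 2};   /* classes of the observed feature set F_J */
    printf("most probable event: E%d\n", classify(observed, 3) + 1);
    return 0;
}
```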


4.3 Trust-Level Calculating

With the determinations made by the abnormal judging module, the trust-level calculating algorithm updates the current user's trust-level and feeds a quantitative "trust" value back to the authentication server.

Definition 3: A user's trust-level is defined as

$T_u = \frac{1}{m} \sum_{k=1}^{m} \alpha_k^{l_k}$   (2)

where Tu denotes the trust-level of user u and m is the number of abnormal event kinds. αk is the influence rate of each kind of event, a real number between 0 and 1, and lk is the occurrence number of event k. Consequently, Tu lies in the range [0, 1]. Each lk starts at 0 to reflect that there has been no prior interaction between the user and the authentication server (that is, an unknown user).

Suppose there are three kinds of abnormal events E1, E2, E3 with trust rates α1 = 0.9, α2 = 0.7 and α3 = 0.5. The following tables show the trend of Tu as the different kinds of events are detected. Assuming that l1, l2 and l3 all increase at the same rate, we find an interesting result: the more event kinds are involved and the larger lk becomes, the faster Tu decreases.

Table 1. Tu as E1 increases

l1   l2   l3   Tu
 0    0    0   1.0000
 3    0    0   0.9097
 5    0    0   0.8635
 7    0    0   0.8261
 9    0    0   0.7958

Table 2. Tu as E1 and E2 increase

l1   l2   l3   Tu
 0    0    0   1.0000
 3    3    0   0.6907
 5    5    0   0.5862
 7    7    0   0.5202
 9    9    0   0.4759

Table 3. Tu as E1, E2 and E3 increase

l1   l2   l3   Tu
 0    0    0   1.0000
 3    3    3   0.3990
 5    5    5   0.2633
 7    7    7   0.1895
 9    9    9   0.1432
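A small driver program illustrates how formula (2) produces the values shown in Tables 1-3. The influence rates are the ones given in the text; the function and variable names are ours.

```c
/* Minimal sketch of the trust-level formula (Eq. 2):
 * T_u = (1/m) * sum_k alpha_k^{l_k}, with alpha = (0.9, 0.7, 0.5). */
#include <math.h>
#include <stdio.h>

double trust_level(const double *alpha, const int *l, int m)
{
    double sum = 0.0;
    for (int k = 0; k < m; k++)
        sum += pow(alpha[k], (double)l[k]);
    return sum / m;
}

int main(void)
{
    const double alpha[3] = {0.9, 0.7, 0.5};
    const int counts[][3] = { {0, 0, 0}, {3, 0, 0}, {3, 3, 0}, {3, 3, 3} };

    for (int i = 0; i < 4; i++)
        printf("l = (%d,%d,%d)  Tu = %.4f\n",
               counts[i][0], counts[i][1], counts[i][2],
               trust_level(alpha, counts[i], 3));
    /* prints 1.0000, 0.9097 (Table 1), 0.6907 (Table 2), 0.3990 (Table 3) */
    return 0;
}
```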

5 An Example

Assume a file access control system. According to the sensitivity S of the files, all files are divided into three classes A, B and C (S_A > S_B > S_C). To maintain the security levels of this system, we define three different certificates C1, C2 and C3 (C3 > C2 > C1). Different combinations of certificates grant different privileges.

Table 4. File access control policies

File   Trust-level        Certificates    History Records
A      0.7 ≤ TL < 1.0     C1, C2, C3      0.7 ≤ AVGTL
B      0.4 ≤ TL < 0.7     C1, C2          0.4 ≤ AVGTL
C      0.1 ≤ TL < 0.4     C1              0.1 ≤ AVGTL


Access control policies are defined in Table 4. There are three kinds of abnormal events E1, E2, E3, and the access control policy defines the lowest threshold of Tu, denoted TuT, as 0.1000. The events E1, E2, E3 and their trust rates α1, α2, α3 are described as follows:
Certificate_Error_Event: a user presents a needless certificate. Although it is valid, it is not the one the authentication server asked for. This event may indicate a certificate mistake by the user. The trust influence rate of this event is α1 = 0.9.
Certificate_Invalidation_Event: a user presents an expired, damaged or revoked certificate. This event may indicate an attempted network fraud. Its trust influence rate is α2 = 0.7.
Request_Overflow_Event: a user sends an abnormally large number of requests. This event may indicate an attempted DoS attack or a virus intrusion. The trust influence rate of this event is α3 = 0.5.

Jimmy wants to access some files and sends a request with his identity certificate to the authentication server. To demonstrate the secure mechanism of ASITL, we assume three different possible scenarios:
Jimmy is a malicious intruder: He does not have a valid certificate at all. From the beginning, he continually sends expired or damaged certificates to the authentication server. Certificate_Invalidation_Event is detected constantly and its occurrence number increases fast. When the occurrence number reaches a threshold amount, Request_Overflow_Event may also be detected. Once Jimmy's TL falls below 0.1, he is forbidden by the system, and the final TL is recorded in the history record under his ID. If this happens more than five times, the user ID is put on the black list.
Jimmy is a potentially dangerous user: He only has a valid certificate C1, so his privilege only allows him to access files of class C, but his true target is the more sensitive files of class A or class B. In order to accumulate a good reputation, he maintains a high TL (0.4 ≤ TL < 1.0) and AVGTL (0.4 ≤ AVGTL) by validly accessing class C files with certificate C1. However, once he presents an invalid certificate C2 or C3, Certificate_Invalidation_Event is triggered and his TL decreases fast. Although Jimmy has built up a high TL and a good history record by dealing with the less sensitive class C files, the more sensitive files of class A or B can never be reached.
Jimmy is a normal user: He sends a file request and the corresponding valid certificates to the authentication server. If his certificates suit the privilege of the request, and his TL and history records satisfy the access control policies, he passes the authentication and his request is served by the application server.

6 Conclusions and Future Work

The ASITL, which supports secure interoperation across multiple security domains, is guided by a set of desiderata for achieving a fine-grained access control system.


In this paper, we introduce a variable, the "trust-level", to reflect a user's trust degree. Based on this value, ASITL dynamically evaluates the user's trust degree and responds to requestors according to the judgment of each new suspicious event. Furthermore, ASITL ensures that all security measures have been completed before sensitive information is exchanged. In future work, we would like to extend our work in several directions. We need to find more efficient learning algorithms to shorten the response period; neural network algorithms or similar methods might be involved. Moreover, we can further optimize the cooperation among the modules of the system to enhance its performance. Finally, trust evaluation for the authentication server and users' privacy issues also need to be investigated.

References

1. Li, N., Mitchell, J., Winsborough, W.: RT: A Role-Based Trust-Management Framework. In: Proceedings of the 3rd DARPA Information Survivability Conference and Exposition (DISCEX III), Washington (2003) 201-212
2. Xiong, L., Liu, L.: PeerTrust: Supporting Reputation-Based Trust for Peer-to-Peer Electronic Communities. IEEE Transactions on Knowledge and Data Engineering, Vol. 16, No. 7 (2004) 843-857
3. Thuraisingham, B.: Trust Management in a Distributed Environment. In: Proceedings of the 29th Annual International Computer Software and Applications Conference, Vol. 2 (2005) 561-572
4. Bertino, E., Khan, L.R., Sandhu, R.: Secure Knowledge Management: Confidentiality, Trust, and Privacy. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 36, No. 3 (2006) 429-438
5. Shin, K., Yasuda, H.: Provably Secure Anonymous Access Control for Heterogeneous Trusts. In: Proceedings of the First International Conference on Availability, Reliability and Security (2006) 24-33

Scalability Analysis of the SPEC OpenMP Benchmarks on Large-Scale Shared Memory Multiprocessors

Karl Fürlinger 1,2, Michael Gerndt 1, and Jack Dongarra 2

1 Lehrstuhl für Rechnertechnik und Rechnerorganisation, Institut für Informatik, Technische Universität München
{fuerling, gerndt}@in.tum.de
2 Innovative Computing Laboratory, Department of Computer Science, University of Tennessee
{karl, dongarra}@cs.utk.edu

Abstract. We present a detailed investigation of the scalability characteristics of the SPEC OpenMP benchmarks on large-scale shared memory multiprocessor machines. Our study is based on a tool that quantifies four well-defined overhead classes that can limit scalability – for each parallel region separately and for the application as a whole. Keywords: SPEC, Shared Memory Multiprocessors.

1 Introduction

OpenMP has emerged as the predominant programming paradigm for scientific applications on shared memory multiprocessor machines. The OpenMP SPEC benchmarks were published in 2001 to allow for a representative way to compare the performance of various platforms. Since OpenMP is based on compiler directives, the compiler and the accompanying OpenMP runtime system can have a significant influence on the achieved performance. In this paper we present a detailed investigation of the scalability characteristics of the SPEC benchmarks on large-scale shared memory multiprocessor machines. Instead of just measuring each application's runtime for increasing processor counts, our study is more detailed by measuring four well-defined sources of overhead that can limit the scalability and by performing the analysis not only for the overall program but also for each individual parallel region separately. The rest of this paper is organized as follows. In Sect. 2 we provide a brief overview of the SPEC OpenMP benchmarks and their main characteristics. In Sect. 3 we describe the methodology by which we performed the scalability analysis and the tool which we used for it. Sect. 4 presents the results of our study, while we discuss related work in Sect. 5 and conclude in Sect. 6.

2 The SPEC OpenMP Benchmarks

The SPEC OpenMP benchmarks come in two variants. The medium variant (SPEC-OMPM) is designed for up to 32 processors, and the 11 applications contained in this suite were created by parallelizing the corresponding SPEC CPU applications. The large variant (SPEC-OMPL) is based on the medium variant (with code modifications to increase scalability), but two applications (galgel and ammp) have been omitted and a larger data set is used. Due to space limitations we omit a textual description of the background, purpose, and implementation of each application; please refer to [7] for such a description. Instead, Table 1 lists the main characteristics of each application with respect to the OpenMP constructs used for parallelization (suffix _m denotes the medium variant, while suffix _l denotes the large variant of each application).

Table 1. The OpenMP constructs used in each of the applications of the SPEC OpenMP benchmark suite

[Table 1: per-application counts of the BARRIER, LOOP, CRITICAL, LOCK, PARALLEL, PARALLEL LOOP and PARALLEL SECTIONS constructs for wupwise_m/wupwise_l, swim_m, swim_l, mgrid_m/mgrid_l, applu_m, applu_l, galgel_m, equake_m, equake_l, apsi_m, apsi_l, gafort_m/gafort_l, fma3d_m, fma3d_l, art_m/art_l and ammp_m.]

3 Scalability Analysis Methodology

We performed the scalability study with our own OpenMP profiling tool, ompP [4,5]. ompP delivers a text-based profiling report at program termination that is meant to be easily comprehensible by the user. As opposed to standard subroutine-based profiling tools like gprof [6], ompP is able to report timing data and execution counts directly for various OpenMP constructs.


In addition to giving flat region profiles (number of invocations, total execution time), ompP performs overhead analysis, where four well-defined overhead classes (synchronization, load imbalance, thread management, and limited parallelism) are quantitatively evaluated. The overhead analysis is based on the categorization of the execution times reported by ompP into one of the four overhead classes. For example, time in an explicit (user-added) OpenMP barrier is considered to be synchronization overhead.

Table 2. The timing categories reported by ompP for the different OpenMP constructs and their categorization as overheads by ompP's overhead analysis. (S) corresponds to synchronization overhead, (I) represents overhead due to imbalance, (L) denotes limited parallelism overhead, and (M) signals thread management overhead.

[Table 2: the timing categories seqT, execT, bodyT, exitBarT, enterT and exitT for the constructs MASTER, ATOMIC, BARRIER, USER REGION, LOOP, CRITICAL, LOCK, SECTIONS, SINGLE, PARALLEL, PARALLEL LOOP and PARALLEL SECTIONS, each marked with its overhead class (S), (I), (L) or (M) where applicable.]

Table 2 shows the details of the overhead classification performed by ompP. This table lists the timing categories reported by ompP (execT, enterT, etc.) for the various OpenMP constructs (BARRIER, LOOP, etc.). A timing category is reported by ompP if a '•' is present, and S, I, L, and M indicate to which overhead class a time is attributed. A detailed description of the motivation for this classification can be found in [5]. A single profiling run with a certain thread count gives the overheads according to the presented model for each parallel region separately and for the program as a whole. By performing the overhead analysis for increasing thread numbers, scalability graphs as shown in Fig. 1 are generated by a set of perl scripts that come with ompP. These graphs show the accumulated runtimes over all threads; the "Work" category is computed by subtracting all overheads from the total accumulated execution time. Note that a perfectly scaling code would give a constant total accumulated execution time (i.e., a horizontal line) in this kind of graph if a fixed dataset is used (as is the case for our analysis of the SPEC OpenMP benchmarks).
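As an illustration of how such a graph is assembled, the following sketch applies the described bookkeeping (Work = accumulated runtime minus the four overhead classes) to made-up per-run totals; it is not part of ompP or its perl scripts.

```c
/* Building the stacked scalability data from per-run totals.
 * The numbers below are invented for illustration. */
#include <stdio.h>

struct run {
    int    threads;
    double total;                      /* accumulated runtime, all threads */
    double sync, imbal, limpar, mgmt;  /* the four overhead classes        */
};

int main(void)
{
    struct run runs[] = {
        { 2,  800.0,  5.0,  10.0, 1.0, 0.5},
        { 8,  860.0, 15.0,  60.0, 3.0, 2.0},
        {32, 1100.0, 60.0, 280.0, 9.0, 8.0},
    };

    for (unsigned i = 0; i < sizeof runs / sizeof runs[0]; i++) {
        struct run *r = &runs[i];
        double ovh  = r->sync + r->imbal + r->limpar + r->mgmt;
        double work = r->total - ovh;  /* flat "work" line = perfect scaling */
        printf("%2d threads: work %.1f s, overhead %.1f s (%.1f%%)\n",
               r->threads, work, ovh, 100.0 * ovh / r->total);
    }
    return 0;
}
```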

4 Results

We have analyzed the scalability of the SPEC benchmarks on two cc-NUMA machines. We ran the medium size benchmarks from 2 to 32 processors on a 32 processor SGI Altix 3700 Bx2 machine (1.6 GHz, 6 MByte L3-Cache), while the tests with SPEC-OMPL (from 32 to 128 processors, with increments of 16) have been performed on a node of a larger Altix 4700 machine with the same type of processor. The main differences to the older Altix 3700 Bx2 are an upgraded interconnect network (NumaLink4) and a faster connection to the memory subsystem. Every effort has been made to ensure that the applications we have analyzed are optimized like "production" code. To this end, we used the same compiler flags and runtime environment settings that have been used by SGI in the SPEC submission runs (this information is listed in the SPEC submission reports), and we were able to achieve performance numbers that were within the range of variations to be expected from the slightly different hardware and software environment. The following text discusses the scalability properties we were able to identify in our study. Due to space limitations we cannot present a scalability graph for each application or even for each parallel region of each application. Fig. 1 shows the most interesting scalability graphs of the SPEC OpenMP benchmarks we have discovered. We also have to limit the discussion to the most interesting phenomena visible and cannot discuss each application.

Results for the medium variant (SPEC-OMPM):

wupwise_m: This application scales well from 2 to 32 threads; the most significant overhead visible is load imbalance, increasing almost linearly with the number of threads used (it is less than 1% for 2 threads and rises to almost 12% of aggregated execution time for 32 threads). Most of this overhead is incurred in two time-consuming parallel loops (muldoe.f 63-145 and muldeo.f 63-145).

swim_m: This code scales very well from 2 to 32 threads. The only discernible overhead is a slight load imbalance in two parallel loops (swim.f 284-294 and swim.f 340-352), each contributing about 1.2% overhead with respect to the aggregated execution time for 32 threads.

mgrid_m: This code scales relatively poorly (cf. Fig. 1a). Almost all of the application's 12 parallel loops contribute to the bad scaling behavior with increasingly severe load imbalance. As shown in Fig. 1a, there appears to be markedly reduced load imbalance for 32 and 16 threads. Investigating this issue further, we discovered that this behavior is only present in three of the application's parallel loops (mgrid.f 265-301, mgrid.f 317-344, and mgrid.f 360-384). A source-code analysis of these loops reveals that in all three instances the loops are always executed with an iteration count that is a power of two (which ranges from 2 to 256 for the ref dataset). Hence, thread counts that are not powers of two generally exhibit more imbalance than powers of two.
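This effect can be reproduced with a back-of-the-envelope calculation: if a power-of-two trip count is split into (near-)equal chunks, only power-of-two thread counts divide it evenly. The sketch below uses a deliberately simplified imbalance metric and is not ompP's overhead model.

```c
/* Chunk imbalance of a power-of-two trip count under an equal-chunk
 * (static-like) distribution, for several thread counts. */
#include <stdio.h>

static double imbalance(int n, int t)
{
    int max_iters = (n + t - 1) / t;        /* ceil(n/t): most loaded thread */
    double avg    = (double)n / t;
    return (max_iters - avg) / max_iters;   /* average waiting fraction      */
}

int main(void)
{
    int n = 256;                            /* power-of-two iteration count */
    int threads[] = {16, 20, 24, 28, 32};

    for (unsigned i = 0; i < sizeof threads / sizeof threads[0]; i++)
        printf("%2d threads: imbalance %.1f%%\n",
               threads[i], 100.0 * imbalance(n, threads[i]));
    /* 16 and 32 divide 256 evenly (0%); 20, 24 and 28 do not */
    return 0;
}
```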

[Figure 1: stacked overhead graphs (Work, Sync, Imbal, Limpar, Mgmt) over thread counts for the panels (a) mgrid_m, (b) applu_m, (c) galgel_m, (d) galgel_m (lapack.f90 5081-5092), (e) equake_m (quake.c 1310-1319), (f) swim_l, (g) mgrid_l and (h) applu_l.]

Fig. 1. Scalability graphs for some of the applications of the SPEC OpenMP benchmark suite. Suffix _m refers to the medium size benchmark, while _l refers to the large size benchmark. The x-axis denotes processor (thread) count and the y-axis is the accumulated time (over all threads) in seconds.


applu_m: The interesting scalability graph of this application (Fig. 1b) shows super-linear speedup. This behavior can be attributed exclusively to one parallel region (ssor.f 138-209) in which most of the execution time is spent (this region contributes more than 80% of the total execution time); the other parallel regions do not show a super-linear speedup. To investigate the reason for the super-linear speedup we used ompP's ability to measure hardware performance counters. By common wisdom, the most likely cause of super-linear speedup is the increase in overall cache size that allows the application's working set to fit into the cache for a certain number of processors. To test this hypothesis we measured the number of L3 cache misses incurred in the ssor.f 138-209 region, and the results indicate that this is in fact the case. The total number of L3 cache misses (summed over all threads) is about 15 billion for 2 threads and 14.8 billion for 4 threads. At 8 threads the cache misses drop to 3.7 billion, at 12 threads they are at 2.0 billion, and from there on the number stays approximately constant up to 32 threads.

galgel_m: This application scales very poorly (cf. Fig. 1c). The most significant sources of overhead that are accounted for by ompP are load imbalance and thread management overhead. There is also, however, a large fraction of overhead that is not accounted for by ompP. A more detailed analysis of the contributing factors reveals that one small parallel loop in particular contributes to the bad scaling behavior: lapack.f90 5081-5092. The scaling graph of this region is shown in Fig. 1d. The accumulated runtime for 2 to 32 threads increases from 107.9 to 1349.1 seconds (i.e., the 32 thread version is only about 13% faster in wall-clock time than the 2 processor execution).

equake_m: This code scales relatively poorly. A major contributor to the bad scalability is the small parallel loop at quake.c 1310-1319. The contribution to the wall-clock runtime of this region increases from 10.4% (2 threads) to 23.2% (32 threads). Its bad scaling behavior (Fig. 1e) is a major limiting factor for the application's overall scaling ability.

apsi_m: This code scales poorly from 2 to 4 processors, but from there on the scaling is good. The largest identifiable overheads are imbalances in the application's parallel loops.

Results for the large variant (SPEC-OMPL):

wupwise_l: This application continues to scale well up to 128 processors. However, the imbalance overhead already visible in the medium variant increases in severity.

swim_l: The dominating source of inefficiency in this application is thread management overhead that dramatically increases in severity from 32 to 128 threads (cf. Fig. 1f). The main source is the reduction of three scalar variables in the small parallel loop swim.f 116-126. At 128 threads more than 6 percent of the total accumulated runtime is spent in this reduction operation; the time for the reduction is actually larger than the time spent in the body of the parallel loop.

mgrid_l: This application (cf. Fig. 1g) shows a similar behavior as the medium variant. Again lower numbers are encountered for thread counts that are powers of two.


The overheads (mostly imbalance and thread management), however, dramatically increase in severity at 128 threads.

applu_l: Synchronization overhead is the most severe overhead of this application (cf. Fig. 1h). Two explicit barriers cause most of this overhead, with severities of more than 10% of the total accumulated runtime each.

equake_l: This code shows improved scaling behavior in comparison to the medium variant, which results from code changes that have been performed.
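The swim_l bottleneck discussed above follows a common pattern: a short parallel loop whose dominant cost at high thread counts is the combination of a few scalar reduction variables. The following C/OpenMP fragment is a generic illustration of that pattern, not the original swim.f 116-126 code.

```c
/* Generic sketch of a short parallel loop that mainly reduces scalars;
 * at high thread counts the combination of the per-thread partial results
 * can cost more than the loop body itself. */
#include <stdio.h>
#define N (1 << 20)

int main(void)
{
    static double u[N], v[N], p[N];
    double ucheck = 0.0, vcheck = 0.0, pcheck = 0.0;

    for (int i = 0; i < N; i++) { u[i] = 1e-3; v[i] = 2e-3; p[i] = 3e-3; }

    #pragma omp parallel for reduction(+:ucheck,vcheck,pcheck)
    for (int i = 0; i < N; i++) {
        ucheck += u[i];
        vcheck += v[i];
        pcheck += p[i];
    }

    printf("%f %f %f\n", ucheck, vcheck, pcheck);
    return 0;
}
```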

5 Related Work

Saito et al. [7] analyze the published results of the SPEC-OMPM suite on large machines (32 processors and above) and describe planned changes for the – then upcoming – large variant of the benchmark suite. A paper by Sueyasu et al. [8] analyzes the scalability of selected components of SPEC-OMPL in comparison with the medium variant. The experiments were performed on a Fujitsu Primepower HPC2500 system with 128 processors. A classification of the applications into good, poor, and super-linear scaling is given and is more or less in line with our results. No analysis on the level of individual parallel regions is performed and no attempt at an overhead classification is made in this publication.

The work of Aslot et al. [1] describes static and dynamic characteristics of the SPEC-OMPM benchmark suite on a relatively small (4-way) UltraSPARC II system. Similar to our study, timing details are gathered on the basis of individual regions and an overhead analysis is performed that tries to account for the difference between observed and theoretical (Amdahl) speedup. While the authors of that study had to instrument their code and analyze the resulting data manually, our ompP tool performs this task automatically.

Fredrickson et al. [3] have evaluated, among other benchmark codes, the performance characteristics of seven applications from the OpenMP benchmarks on a 72 processor Sun Fire 15K. In their findings, all applications scale well with the exception of swim and apsi (which is not in line with our results, as well as, e.g., [7]). This study also evaluates "OpenMP overhead" by counting the number of parallel regions and multiplying this number with an empirically determined overhead for creating a parallel region, derived from an execution of the EPCC micro-benchmarks [2]. Compared to our approach, this methodology of estimating the OpenMP overhead is less flexible and accurate, as for example it does not account for load-imbalance situations and requires an empirical study to determine the "cost of a parallel region". Note that in our study all OpenMP-related overheads are accounted for, i.e., the work category does not contain any OpenMP-related overhead.

6 Conclusion and Future Work

We have presented a scalability analysis of the medium and large variants of the SPEC OpenMP benchmarks. The applications show widely different scaling behavior, and we have demonstrated that our tool ompP can give interesting, detailed insight into this behavior and can provide valuable hints towards an explanation of the underlying reasons.


Notably, our scalability methodology encompasses four well-defined overhead categories and offers insights into how the overheads change with increasing numbers of threads. Also, the analysis can be performed for individual parallel regions, and, as shown by the examples, the scaling behavior can be widely different; one badly scaling parallel region can have an increasingly detrimental influence on an application's overall scalability characteristics. Future work is planned along two directions. Firstly, we plan to exploit ompP's ability to measure hardware performance counters to perform a more detailed analysis of memory access overheads. All modern processors allow the measurement of cache-related events (misses, references) that can be used for this purpose. Secondly, we plan to exploit the knowledge gathered in the analysis of the SPEC benchmarks for an optimization case study. Possible optimizations suggested by our study include the privatization of array variables, changes to the scheduling policy of loops, and avoiding the usage of poorly implemented reduction operations.

References

1. Vishal Aslot and Rudolf Eigenmann. Performance characteristics of the SPEC OMP2001 benchmarks. SIGARCH Comput. Archit. News, 29(5):31-40, 2001.
2. J. Mark Bull and Darragh O'Neill. A microbenchmark suite for OpenMP 2.0. In Proceedings of the Third Workshop on OpenMP (EWOMP'01), Barcelona, Spain, September 2001.
3. Nathan R. Fredrickson, Ahmad Afsahi, and Ying Qian. Performance characteristics of OpenMP constructs, and application benchmarks on a large symmetric multiprocessor. In Proceedings of the 17th ACM International Conference on Supercomputing (ICS 2003), pages 140-149, San Francisco, CA, USA, 2003. ACM Press.
4. Karl Fürlinger and Michael Gerndt. ompP: A profiling tool for OpenMP. In Proceedings of the First International Workshop on OpenMP (IWOMP 2005), Eugene, Oregon, USA, May 2005.
5. Karl Fürlinger and Michael Gerndt. Analyzing overheads and scalability characteristics of OpenMP applications. In Proceedings of the Seventh International Meeting on High Performance Computing for Computational Science (VECPAR'06), Rio de Janeiro, Brazil, 2006.
6. Susan L. Graham, Peter B. Kessler, and Marshall K. McKusick. gprof: A call graph execution profiler. SIGPLAN Not., 17(6):120-126, 1982.
7. Hideki Saito, Greg Gaertner, Wesley B. Jones, Rudolf Eigenmann, Hidetoshi Iwashita, Ron Lieberman, G. Matthijs van Waveren, and Brian Whitney. Large system performance of SPEC OMP2001 benchmarks. In Proceedings of the 2002 International Symposium on High Performance Computing (ISHPC 2002), pages 370-379, London, UK, 2002. Springer-Verlag.
8. Naoki Sueyasu, Hidetoshi Iwashita, Kohichiro Hotta, Matthijs van Waveren, and Kenichi Miura. Scalability of SPEC OMP on Fujitsu PRIMEPOWER. In Proceedings of the Fourth Workshop on OpenMP (EWOMP'02), 2002.

Analysis of Linux Scheduling with VAMPIR

Michael Kluge and Wolfgang E. Nagel

Technische Universität Dresden, Dresden, Germany
{Michael.Kluge,Wolfgang.Nagel}@tu-dresden.de

Abstract. Analyzing the scheduling behavior of an operating system becomes more and more interesting because multichip mainboards and multi-core CPUs are available for a wide variety of computer systems. Those systems can range from a few CPU cores to thousands of cores. Up to now there has been no tool available to visualize the scheduling behavior of a system running Linux. The Linux kernel has a unique implementation of threads: each thread is treated as a process. In order to be able to analyze scheduling events within the kernel we have developed a method to dump all information needed to analyze process switches between CPUs into files. These data are then analyzed using the VAMPIR tool. Traditional VAMPIR displays are reused to visualize scheduling events. This approach allows us to follow processes as they switch between CPUs as well as to gather statistical data, for example the number of process switches.

1 Introduction

The VAMPIR [7] tool is widely used to analyze the behavior of parallel (MPI, OpenMP and pthreads) as well as sequential programs. This paper will demonstrate how the capabilities of VAMPIR can be used to analyze scheduling events within the Linux kernel. These events are gathered by a Linux kernel module that has been developed by the authors. This development has been motivated by a scheduling problem of an OpenMP program that will be used within this paper to demonstrate the application of the software. Linux itself is an operating system with growing market share in the HPC environment. Linux has its own way of implementing threads: a thread is not more than a process that shares some data with other processes. Within the Linux kernel there is no distinction between a thread and a process; each thread also has its own process descriptor. So within this paper the terms 'thread' and 'process' do not differ much. Although we will talk about OpenMP threads, those threads are also handled by the Linux kernel as normal processes when we are talking about scheduling. The first section gives a short overview of the state of the art in monitoring the Linux kernel. The next section is dedicated to our Linux kernel module and the output to OTF [1]. The third section will show how various VAMPIR displays that have been designed to analyze time lines of parallel programs or messages in MPI programs can be reused for a visual analysis of Linux scheduling events. The paper is closed by a short summary and an outlook.

2 Analyzing Scheduling Events in the Linux Kernel

Analyzing scheduling events is an interesting part of the whole field of performance analysis due to effects that can be traced back to a specific process placement or to cache thrashing. Within this paper we are referring to a multiprogramming environment. This means that multiple programs run in parallel on a given set of CPUs. The processes associated with these programs are not pinned to a specific CPU; therefore the scheduler is free to place the processes as needed onto the available CPUs. We have identified two main approaches to analyze the scheduling behavior of a specific system. The first idea is to instrument the kernel scheduler itself to monitor its actions. This would have the advantage of giving insight into scheduler decisions. The other idea is an indirect approach: if the CPU number that a process is running on is traced over time, together with information about the process state (running or suspended), the priority, the nice value, interactivity etc., one can show strengths and weaknesses of the scheduler as well. All the information needed for the second approach is available within the process descriptor in the Linux kernel. The information needed for the first approach is available only locally within the scheduler implementation and not globally in the kernel. In contrast, the list of current tasks and their properties is available everywhere within the kernel. We have found no existing tool that is able to gather the information described above. The Linux Trace Toolkit [5] collects information about processes but does not have any information about the CPU number a process is running on. There are tools that are able to instrument a kernel (like KernInst [11] or KTau [8]), but they require a unique program to be written and put into the kernel. Monitoring the /proc file system [9] would be a solution but cannot provide the fine granularity needed. For AIX the AIX kernel trace facilities can be used to gather data about various events in the kernel [2]. For really fine grained monitoring of the process-to-CPU mapping and the process state we decided to try a different way.

3 Tracing Scheduling Events

Our approach follows the second idea from the section above. Because the information about all tasks on the system is available at each point in the Linux kernel, the main idea is to write a kernel module that dumps the needed information at specifiable time intervals. Some kind of GUI or automatic analysis tool can later be used to analyze this data. The advantages of a kernel module are the ability to load and unload the module as needed as well as the short time for recompilation after a change in the source code, because the kernel itself is not touched [4]. The design of the kernel module is as follows. A kernel thread is created that inspects all given threads at an adjustable time interval. The minimum time between two inspections is the so-called 'kernel frequency', which can be chosen at kernel configuration time as 100, 250 or 1000 ticks per second.


The kernel module is given a particular process id (PID) to watch. It will inspect this PID and all its children and will dump all needed information (currently CPU and process state) to the relayfs [12] interface. This way the user is able to select all processes or any part of the current process tree. One problem here are processes that get reparented: if a process finishes while it still has child processes, those child processes get the init process (PID 1) as their new parent. If not all processes but a specific subset is traced, these child processes will vanish from the trace at this point in time. The kernel module itself can be started, configured and stopped via an interface that has been made available through the sysfs file system, so the kernel module can stay within the kernel without generating overhead when nothing needs to be measured. To dump the gathered data to disk, a relatively new part of the Linux kernel, relayfs, is used. relayfs is a virtual file system that has been designed for an efficient transfer of large amounts of data from the kernel to user space. It uses one thread per CPU to collect data through a kernel-wide interface and to transfer the data. The data is collected inside relayfs within sub-buffers; only full sub-buffers are transferred to user space. On the user side, one thread per CPU is running to collect the full sub-buffers and to write the data to a file (one file per CPU). This thread sleeps until it gets a signal from the kernel that a full sub-buffer is available. This approach is scalable and disturbs the measured tasks as little as possible. In summary, the kernel module currently supports the following features:
– enable/disable tracing from user space on demand
– tracing of user-selectable processes or tracing the whole system
– changing parameter settings from user space (via sysfs)
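A heavily simplified sketch of such a sampling thread is given below. The relay channel setup (relay_open), the sysfs control interface and the restriction to a selected PID subtree are omitted, and the record layout is invented; it illustrates the approach on a 2.6-era kernel and is not the authors' module code.

```c
/* Simplified sampling thread of such a kernel module (illustration only). */
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/rcupdate.h>
#include <linux/relay.h>
#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/types.h>

struct sched_sample {
    u64   timestamp;      /* jiffies at sampling time          */
    pid_t pid;
    int   cpu;            /* CPU the task currently resides on */
    long  state;          /* runnable, suspended, ...          */
};

static struct rchan *chan;   /* relay channel, created elsewhere with relay_open() */

static int sampler_fn(void *unused)
{
    while (!kthread_should_stop()) {
        struct task_struct *p;
        struct sched_sample s;

        rcu_read_lock();
        for_each_process(p) {                  /* walk the global task list */
            s.timestamp = get_jiffies_64();
            s.pid       = p->pid;
            s.cpu       = task_cpu(p);
            s.state     = p->state;
            relay_write(chan, &s, sizeof(s));  /* into per-CPU sub-buffers  */
        }
        rcu_read_unlock();

        msleep(10);   /* sampling interval, bounded below by the kernel tick */
    }
    return 0;
}
```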

4 Using VAMPIR to Analyze Scheduling Events

Now we have a collection of events that describes which process has been on which CPU in which state at different timestamps. The amount of data can become very large and needs a tool to be analyzed; some kind of visual and/or automatic analysis is needed here. There are basically two different things that we want to analyze from those trace files:
1. the number of active processes on each CPU
2. following the different processes (and their current states) across the CPUs over time
As threads and processes are basically treated the same way by the Linux kernel, the hierarchical structure between all processes/threads is also known at this point. For the first application it is possible to count the tasks in the state 'runnable' on each CPU. To actually be able to view this data the following approach has been identified:


– each CPU is mapped to what VAMPIR recognizes as a process
– task switches can be shown as a one-byte message between the associated CPUs (processes)
– forks and joins can also be shown as messages
– the number of forks, joins and task switches per kernel tick are put into counters

By using different message tags for the forks, joins and task switches, different colors can be used within VAMPIR to make the display even clearer. The filter facilities of VAMPIR can be used to analyze CPU switches, forks or joins independently. Due to the zooming feature of VAMPIR (which updates each open display to the portion of the time line that is currently selected) it is possible to analyze the scheduling behavior over time. At the beginning all CPUs (processes for VAMPIR) enter a function called '0'. When the first process is scheduled onto a CPU, it will leave this function and enter a function called '1'. By following this idea we get a very informative display of the number of runnable processes on the different CPUs. By looking at VAMPIR's time line and the counter time line in parallel we already get a good feeling for what was happening on the system. For following the processes over different CPUs this scheme needs to be extended: instead of one VAMPIR process line per CPU there are multiple process lines per CPU onto which the real processes are placed. Such a line will be called a stream on that CPU from now on. In this scenario, a process that enters a specific CPU is placed on a free stream. For each process one or two virtual functions are defined for VAMPIR. One is always needed and denotes that a specific process ID is present on a stream. This can further be extended to have distinct virtual VAMPIR functions for the two states of a process (running/not running). In the second case we can generate a leave event for one virtual function and an enter event for the other virtual function on the same stream when a process switches its state. The idea of modeling task switches as messages allows us to use VAMPIR's Message Statistics window to analyze how many processes switched from one CPU to another and how often this took place for each (from, to) CPU pair.
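The mapping onto the 'number of runnable processes' display can be illustrated with a few lines of code: whenever the runnable count on a CPU changes, the previous virtual function is left and the new one is entered. The record layout is invented and the textual output stands in for the actual OTF writer calls.

```c
/* Converting sampled scheduling data into the "runnable tasks per CPU"
 * display: each change in the count becomes an enter/leave pair of
 * virtual functions named after the count. */
#include <stdio.h>

#define MAX_CPUS 8

struct record {
    unsigned long long time;   /* timestamp of the sample   */
    int runnable[MAX_CPUS];    /* tasks in state 'runnable' */
};

static void emit(const struct record *rec, int *current_fn)
{
    for (int cpu = 0; cpu < MAX_CPUS; cpu++) {
        int n = rec->runnable[cpu];
        if (n == current_fn[cpu])
            continue;
        printf("%llu CPU%d leave \"%d\"\n", rec->time, cpu, current_fn[cpu]);
        printf("%llu CPU%d enter \"%d\"\n", rec->time, cpu, n);
        current_fn[cpu] = n;
    }
}

int main(void)
{
    int current_fn[MAX_CPUS] = {0};   /* all CPUs start in function "0" */
    struct record trace[] = {
        {100, {1, 0, 0, 0, 0, 0, 0, 0}},
        {200, {1, 2, 0, 0, 0, 0, 0, 0}},
        {300, {0, 2, 1, 0, 0, 0, 0, 0}},
    };

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        emit(&trace[i], current_fn);
    return 0;
}
```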

5 OTF Converter

To be able to analyze the collected data with VAMPIR, a tool is needed to convert the data dumped by the kernel module into a trace file. We have chosen to utilize the OTF library due to its easy handling. Within relayfs the data obtained by the kernel thread is dumped to the file associated with the CPU where the kernel thread is running at this point in time. The converter has been written to serialize all events within these files after the program run and to follow the tasks as they jump between CPUs. It generates OTF output with all necessary information like process names and states and CPU utilization, together with various counters.


The example we will look at in the next section creates about 1 GB of trace data from all CPUs together. This example runs for about 6 minutes on 8 CPUs. The conversion to the OTF file format takes about one minute and results in OTF files between 100 and 120 MB.

6 Example

Our example is derived from a problem observed on our Intel Montecito test system. It has 4 Dual-Core Itanium 2 CPUs running at 1.5 GHz (MT disabled). The multiprogramming capabilities of a similar system (SGI Altix 3700) have been investigated with the PARbench tool [6], [3], [10]. One result was that an OpenMP-parallelized program that does independent computation in all threads all the time (without accessing memory) behaves unexpectedly in an overload situation. We put eight sequential tasks and eight parallel tasks (which open eight OpenMP threads each) on eight CPUs, so we have 72 active threads that all need CPU time and do hardly any memory accesses. The algorithm used in each task is the repeated (100000 times) calculation of 10000 Fibonacci numbers. The sequential version takes about 2 seconds to run. The OpenMP parallel program exists in two flavors. The first flavor has one big parallel section: 100000 * 10000 numbers are calculated in one block. The second implementation opens and closes the OpenMP parallel section 100000 times to calculate 10000 Fibonacci numbers each time. One parallel task with 8 parallel threads also needs 2 seconds for both flavors. In the overload situation, all 72 threads ran concurrently on the system.

Table 1. Wall time in seconds of the sequential and parallel program versions in different execution environments

program      big OpenMP block   small OpenMP block, busy waiting   small OpenMP block, yield CPU
sequential   19-21              2-3                                8-16
parallel     19-21              45-50                              21-23
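A minimal sketch of the two program flavors is given below; the Fibonacci kernel and the iteration counts follow the description above, but this is an illustration rather than the original benchmark source.

```c
/* Sketch of the two OpenMP flavors of the test program: both let every
 * thread compute independently, they differ only in how often the
 * parallel region is opened and closed. */
static void fib_block(int n)     /* compute n Fibonacci numbers */
{
    volatile unsigned long a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        unsigned long t = a + b;
        a = b;
        b = t;
    }
}

int main(void)
{
    /* Flavor 1: one big parallel region, the threads synchronize only
     * once at its end and use their full timeslice for computation. */
    #pragma omp parallel
    {
        for (int r = 0; r < 100000; r++)
            fib_block(10000);
    }

    /* Flavor 2: the parallel region is opened and closed 100000 times,
     * so every iteration ends at a synchronization point (where the
     * runtime busy waits for KMP_BLOCKTIME before yielding the CPU). */
    for (int r = 0; r < 100000; r++) {
        #pragma omp parallel
        {
            fib_block(10000);
        }
    }

    return 0;
}
```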

If we use the first OpenMP implementation, all 72 tasks/threads exit after about 20 seconds (±1 second). If we use the second flavor, the eight sequential tasks exit after 2 to 3 seconds and the parallel tasks exit after 45 to 50 seconds. The explanation for the different behavior was found after the investigation with our tool: for the first flavor the tasks do not synchronize. On an overloaded system the tasks get out of sync easily. The default behavior of the OpenMP implementation at a synchronization point is to busy wait for 200 ms and to call sleep() afterwards. That way the OpenMP threads of the first flavor synchronize just once and use their full timeslice for computation. In the second flavor the parallel tasks spend part of their timeslice busy waiting. This can be improved by setting the busy-wait time to 0 with export KMP_BLOCKTIME=0.

Fig. 1. Screenshot of the full time line; note the three different phases

With busy waiting disabled, the sequential tasks exit after 8 to 16 seconds and the parallel tasks need between 21 and 23 seconds. The numbers are compiled in Table 1. The VAMPIR screenshot of the scheduling time line for all three runs is given in Figure 1. Every switch of a task from one CPU to another is marked by a (blue) line. From the beginning of the time line to about 1:50 min the run with the one big OpenMP block took place. Afterwards the OpenMP busy-waiting example was executed. As the last example, from about 5:30 minutes to the end of the time line, the run with disabled busy waiting is shown. Figure 2 shows all switches from/to all CPUs. By zooming in and looking into the different parts of the time line, the following facts could be collected for the three different runs:
1. After spawning all the processes the system is balanced after a relatively short period of time. The load on the individual CPUs is well balanced. Almost no rescheduling occurs during this period of time.
2. For the second run the balancing of the system takes much longer. During the whole second run, every few seconds there are some scheduling events where tasks switch between CPUs. The reason for this is that some tasks get suspended (after the busy-wait time has elapsed) and the system needs to be re-balanced afterwards.
3. The third case again is very different. Tasks get suspended and awakened very often, thus the CPU utilization jitters a lot (due to the short OpenMP regions and no busy waiting). For that reason the system never gets well balanced, but since no CPU cycles are spent busy waiting, this scenario has a shorter wall time than the second one.


Fig. 2. Screenshot of all process switches

7 Conclusion

The work presented has two main results. First of all, we designed a convenient measurement environment to collect scheduling events from the Linux kernel (a kernel module plus relayfs). Second, we reused VAMPIR's capabilities for a different purpose: traditional VAMPIR displays have been reinterpreted for our purposes and provide very useful information to analyze the scheduling behavior of a Linux system. A test case has been investigated and the underlying problem has been identified. For the future there are various opportunities to follow. One very interesting idea is to correlate this information with a traditional program trace to be able to follow effects like cache thrashing or other things that are only analyzable by looking at the whole system and not only at a single program trace obtained in user space. This work has also shown that short OpenMP sections in an overload situation on Linux are counterproductive. With busy waiting disabled this can be improved; this way the OpenMP threads sleep while waiting on a barrier. There is then a possibility that the Linux kernel classifies these threads as 'interactive' and starts to shorten their timeslice. The authors want to thank their colleagues Andreas Knüpfer, Holger Brunst, Guido Juckeland and Matthias Jurenz for useful discussions and a lot of ideas.


References

1. Andreas Knüpfer, Ronny Brendel, Holger Brunst, Hartmut Mix, and Wolfgang E. Nagel. Introducing the Open Trace Format (OTF). In Vassil N. Alexandrov, Geert Dick van Albada, Peter M.A. Sloot, Jack Dongarra, Eds., Computational Science – ICCS 2006: 6th International Conference, Reading, UK, May 28-31, 2006, Proceedings, volume II of Lecture Notes in Computer Science. Springer Berlin / Heidelberg.
2. IBM. http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixprggd/genprogc/trace_facility.htm.
3. M.A. Linn. Eine Programmierumgebung zur Messung der wechselseitigen Einflüsse von Hintergrundlast und parallelem Programm. Technical Report Jül-2416, Forschungszentrum Jülich, 1990.
4. Robert Love. Linux Kernel Development (German translation). ISBN 3-8273-2247-2. Addison-Wesley, 1st edition, 2005.
5. Mathieu Desnoyers and Michel R. Dagenais. Low Disturbance Embedded System Tracing with Linux Trace Toolkit Next Generation. http://ltt.polymtl.ca, November 2006.
6. W.E. Nagel. Performance evaluation of multitasking in a multiprogramming environment. Technical Report KF-ZAM-IB-9004, Forschungszentrum Jülich, 1990.
7. Wolfgang E. Nagel, Alfred Arnold, Michael Weber, Hans-Christian Hoppe, and Karl Solchenbach. VAMPIR: Visualization and Analysis of MPI Resources. In Supercomputer 63, Volume XII, Number 1, pages 69-80, 1996.
8. A. Nataraj, A. Malony, A. Morris, and S. Shende. Early Experiences with KTAU on the IBM BG/L. In Proceedings of the EUROPAR 2006 Conference, LNCS 4128, pages 99-110. Springer, 2006.
9. Red Hat documentation. http://www.redhat.com/docs/manuals/linux/RHL-7.3-Manual/ref-guide/ch-proc.html, November 2006.
10. Rick Janda. SGI Altix: Auswertung des Laufzeitverhaltens mit neuen PARbench-Komponenten. Diplomarbeit, Technische Universität Dresden, June 2006.
11. Ariel Tamches and Barton P. Miller. Using dynamic kernel instrumentation for kernel and application tuning. International Journal of High Performance Computing Applications, 13(3), 1999.
12. Tom Zanussi et al. relayfs home page. http://relayfs.sourceforge.net, November 2006.

An Interactive Graphical Environment for Code Optimization

Jie Tao 1, Thomas Dressler 2, and Wolfgang Karl 2

1 Institut für Wissenschaftliches Rechnen, Forschungszentrum Karlsruhe, 76021 Karlsruhe, Germany
[email protected]
2 Institut für Technische Informatik, 76128 Karlsruhe, Germany
[email protected]

Abstract. Applications usually do not show satisfactory initial performance and require optimization. This kind of optimization often covers a complete process, starting with gathering performance data, followed by performance visualization and analysis, up to bottleneck finding and code modification. In this paper we introduce DECO (Development Environment for Code Optimization), an interactive graphical interface that enables the user to conduct this whole process within a single environment.

Keywords: Performance tools, visualization, cache optimization.

1 Introduction

General-purpose architectures are not tailored to applications. As a consequence, most applications do not show high performance when initially running on a given machine and have to be optimized to achieve the expected performance metrics. This kind of optimization, however, is a quite tedious task for users, and as a consequence various tools have been developed to provide support. First, users need a performance analyzer [1] or a visualization tool [7] capable of presenting the execution behavior and performance bottlenecks. These tools have to rely on profilers [4], counter interfaces [3], or simulators [6] to collect performance data; hence, tools for data acquisition are required. In addition, users need platforms to perform transformations on the code. These platforms can be integrated into the analysis tool but usually exist as individual toolkits. In summary, users need the support of a set of different tools. Actually, most tool vendors provide a complete toolset to help the user conduct the optimization process step by step, from understanding the runtime behavior to analyzing performance hotspots and detecting optimization objects. Intel, for example, has developed the VTune Performance Analyzer for displaying critical code regions and the Thread Profiler for presenting thread interaction and contention [5].


The Center for Information Services and High Performance Computing at the University of Dresden developed Vampir [2] for performance visualization and Goofi [8] for supporting loop transformations in Fortran programs. Similarly, we implemented a series of tools for cache locality optimization. This includes a cache visualizer [9] which demonstrates cache problems and their reasons, a data profiler which collects information about global cache events, a cache simulator [10] for delivering runtime cache activities, and a pattern analysis tool [11] for discovering the affinity among memory operations. By applying these tools to optimize applications, we found that it is inconvenient to use them separately. Therefore we developed the program development environment DECO in order to give the user a single interface for conducting the whole process of code tuning. A further motivation for developing DECO is the users' requirement of a platform to work with their programs; this platform must provide all necessary functionality for building and running an application. A third reason for a program development environment is that users need to compare the execution time or other performance metrics of different runs with both unoptimized and optimized code versions. It is more convenient if users can acquire this information within a single view than having to switch across several windows. These features are general and can be applied directly, or with slight extension, by other tool developers to build a development environment for their own needs. For our purpose of cache locality optimization, DECO has an additional property: it visualizes the output of the pattern analysis tool and maps the access pattern to the source code. The rest of the paper is organized as follows. We first describe the design and implementation of DECO in Section 2. In Section 3 we present its specific features for cache locality optimization, together with some initial optimization results with small codes. In Section 4 the paper is concluded with a short summary and some future directions.

2 DECO Design and Implementation

The goal of DECO is to provide program developers with an easy-to-use environment for step-by-step running the program, analyzing the performance, conducting optimizations, executing the program again, and then studying the impact of the optimizations. This is a feedback loop for continuous optimization; users repeat it until satisfactory performance is achieved. To give the user a simple view, DECO provides all functionality needed for the whole loop in a single window. As depicted in Figure 1, this window consists of a menu bar and three visualization areas. The menu bar is located at the top of the window, with items for file operations, view configuration, project creation, executable building, and performance analysis. The left field under the menu bar is a code editor, where the program source is displayed in different colors for a better overview and easier analysis of the program logic.


Fig. 1. Main window of the program development environment

Next to the editor is a subwindow for presenting runtime results, such as execution time, cache miss statistics, communication overhead (e.g. for MPI programs), and analysis of specific constructs (e.g. parallel and critical regions in OpenMP programs). In addition, the visualization of access patterns, which serves our special need for cache locality optimization, is combined with this subwindow. The output and error reports delivered during the run of applications or tools are displayed at the bottom.

Building Projects. DECO targets realistic applications, which usually comprise several files including headers and a Makefile. Hence, applications are regarded as projects within DECO. A project must first be created and can then be loaded into the DECO environment using the corresponding options in the menu bar. Figure 2 shows two sample subwindows for creating a project. First, users give the project a name and the path where the associated files are stored. Then they add the files that need analysis and potentially optimization to the project by simply selecting them from the files at the given path, as illustrated in the top picture of Figure 2. A set of files may be open concurrently, but only one file is active for editing. After modifying the source program, users can generate an executable of the application using the Compile menu, which contains options like make and make clean; commands for running the application are also included.

Tool Integration. DECO allows the user to add supporting tools to the environment. To insert a tool, information about its name, its path, and its configuration parameters has to be provided. For this, DECO lists the potential parameters that can be specified as command line options for a specific toolkit. In the case of the cache simulator, for example, it allows the user to specify parameters for the frontend, the simulator, and the application. Using this information, DECO builds an execution command for each individual tool and attaches this command to the corresponding item in the Analysis menu of the main window. This means that tools can be started with a menu choice.


Fig. 2. Creating a new project

While one tool is running, users can configure another tool or edit the application.

Execution Results. To enable a comparative study of different program versions, DECO visualizes the output of applications. Depending on the application, this output can be execution time, overhead for parallel programs, statistics on memory accesses, or synchronization time of shared memory applications. The concrete example in Figure 1 visualizes the statistics on cache hits and misses.

3 Visualizing the Access Pattern

As mentioned, a specific feature of DECO for our cache locality optimization is the ability to depict access patterns in the source code. This leads the user directly to the optimization target and, more importantly, shows the user how to optimize. In the following, we first give a brief introduction to the pattern analysis tool and then describe the visualization in detail.

Pattern Analyzer. The basis of the analyzer [11] is a memory reference trace which records all runtime memory accesses. By applying algorithms often used in bioinformatics for pattern recognition, it finds affinity and regularity among the references, such as access chains and access strides. The former is a group of accesses that repeatedly occur together but target different memory locations. This information can be used to perform data regrouping, a strategy for cache optimization which packs successively requested data into the same cache block by declaring the corresponding variables next to each other in the source code. The latter is the stride between accesses to neighboring elements of an array. This information can be used to guide prefetching, another cache optimization strategy, because it tells which data is needed next.
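To make the two pattern categories concrete, here is a small, hedged C illustration; the data structures and the loop are invented for this sketch and do not come from the paper.

#define N 1024

struct particle {
    double x, y, z;   /* fields that are always read together */
    double mass;      /* rarely used field                     */
};

static struct particle p[N];
static double pos[N];

double traverse(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++) {
        /* p[i].x, p[i].y and p[i].z are always accessed together but at
         * different addresses: an access chain.  Declaring them next to
         * each other lets them share a cache block (data regrouping).   */
        s += p[i].x + p[i].y + p[i].z;

        /* successive iterations access pos[i] with a constant distance of
         * sizeof(double): an access stride that can guide prefetching.   */
        s += pos[i];
    }
    return s;
}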


Additionally, the pattern analyzer reports whether an access is a cache hit or a cache miss. For a cache miss, it further calculates the push-back distance, which gives the number of steps a missing access must be moved back in order to achieve a cache hit. Users can clearly use this information to change the order of memory accesses for a better cache hit ratio. However, it is difficult to apply this information in its original form, based on virtual addresses and numbers. Therefore, DECO provides the specific capability of mapping the patterns to the program.

Visualization. Using DECO, patterns are displayed within the source code but also in a separate window for deeper insight. Figure 3 is an example of the access chains. The window on the right side lists all detected chains with a detailed description, including the ID, the number of elements, the frequency of occurrence, and the initial address. It is also possible to observe more detailed information about a single chain, for example, all contained accesses and whether each access is a hit or a miss. This detailed information helps the user decide whether the grouping optimization is necessary. On the left side the access chains are shown in the source code. By clicking an individual chain in the right window, the corresponding variables and the locations of the references are immediately marked in the program. Users can utilize this information to move the declarations of the associated variables into a single code line.

Fig. 3. Presentation of access chains in two windows

Fig. 4. Visualization of access stride


Similarly, the push-back distance is also combined with the program. For a selected variable, DECO marks the first access to it in the code. The remaining accesses are then marked one by one with different colors indicating whether they hit or miss. In case of a cache miss, the position where the reference would have to be issued in order to avoid the miss is also marked with a color. Based on this information users can decide whether it is possible to shift an access. This optimization improves the temporal locality of single variables and is especially effective for variables in loops.

The access stride of arrays can be observed in three different forms. First, an initial overview lists all detected strides with information about the start address, the stride length, and the number of occurrences. A further view displays individual strides in detail, with descriptions of all references holding this stride. The third form is a diagram; the lower part of Figure 4 is an example. Within this diagram, all array elements holding the stride are depicted as small blocks. Each small block represents a byte; hence, an element usually consists of several blocks, depending on the element type of the array. For example, the concrete diagram in Figure 4 demonstrates an array of 4-byte elements and a stride of length one. The first block of each element is colored in order to exhibit the hit/miss behavior. This helps the user reorganize the array structure or change the sequence of accesses to it if a large number of misses has been introduced by this array. The size of the blocks can be adjusted using the spin-box at the right corner above the diagram. This allows the user to either observe the global access characteristics of the complete array or focus on a specific region of elements.

Sample Optimization. We show two examples to demonstrate how this special feature of DECO is used for cache optimization. The first code implements a simple scheduler for user-level threads. With this scheduler the threads release the CPU by themselves, rather than being evicted after a timeslice. To save the registers during a thread swap, a stack is managed for each thread. The current position in the stack has to be recorded in order to restore the registers when the thread acquires the timeslice again. We use a struct to store this stack pointer together with the thread's ID and its stack. On a thread switch the stack pointer has to be updated: the pointer of the old thread is written back and the pointer of the new thread is fetched. The declaration of the struct and the code for the stack pointer update are depicted in Figure 5. We perform 1000 switches between four threads that only release the processor. The trace analysis detects five access chains that repeat more than 1000 times. Examining the chains in more detail, we find that for four of the five chains only three of all accesses to the associated addresses are cache hits. Switching to the marked code lines in the source, we further see that the corresponding accesses perform the stack pointer update. This leads us to move the definition of stackPointer out of the struct and to define all four pointers together. As a result, the number of L1 misses is reduced from 4011 to 9. This significant improvement is the result of grouping.


Fig. 5. Source code of the thread scheduler
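Figure 5 itself is not reproduced here. The following C sketch merely illustrates the regrouping described above; all type, field, and constant names are ours and are not taken from the paper.

#define STACK_SIZE   1024
#define NUM_THREADS  4

/* Original layout (sketch): each thread descriptor embeds its stack
 * pointer, so the four frequently updated pointers lie far apart in
 * memory and on different cache blocks. */
struct thread_before {
    int   id;
    char *stackPointer;
    char  stack[STACK_SIZE];
};

/* Regrouped layout (sketch): the stack pointers of all threads are
 * defined together and can share one cache block, while the rarely
 * touched stacks stay inside the descriptors. */
struct thread_after {
    int  id;
    char stack[STACK_SIZE];
};

static struct thread_after threads[NUM_THREADS];
static char *stackPointer[NUM_THREADS];

/* stack pointer update on a switch from thread 'from' to thread 'to' */
char *switch_threads(int from, int to, char *current_sp) {
    stackPointer[from] = current_sp;   /* write back old thread's pointer */
    (void)threads;
    return stackPointer[to];           /* fetch new thread's pointer      */
}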

The second example performs a matrix addition. The working set contains three matrices which are declared one after another. The initial run of this code reported 12288 L1 misses without any hits. The trace analysis detected three strides of length one, each corresponding to a single matrix. In principle, many accesses with such a stride should be cache hits because, with a line size of 64 bytes, one cache line can hold several elements. The only explanation is that there are mapping conflicts between the three matrices, and these conflicts cause a matrix block to be evicted from the cache before the other elements in it can be used. An efficient way to eliminate this conflict is to insert a buffer between two of the matrices so that the mapping behavior of the matrix that follows is changed. As the L1 cache used is 2-way set-associative, meaning that each cache set can hold two data blocks, we only need to alter the mapping behavior of one matrix. Here, we put a buffer of one cache line between the second and the third matrix. With this change, only 769 misses were observed.
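A minimal C sketch of this padding idea follows. The matrix dimension and element type are illustrative, not the ones used in the paper, and whether the pad actually shifts the mapping depends on how the compiler and linker lay out the global data.

#include <stdio.h>

#define N          64     /* illustrative matrix dimension  */
#define CACHE_LINE 64     /* assumed cache line size, bytes */

static float a[N][N];
static float b[N][N];
static char  pad[CACHE_LINE];   /* one-cache-line buffer between two matrices */
static float c[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = a[i][j] + b[i][j];
    printf("%f %c\n", c[0][0], pad[0] ? 'x' : '-');   /* keep pad referenced */
    return 0;
}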

4 Conclusions

This paper introduces a program development environment that helps the user perform code optimization. This environment was implemented because we observed that code optimization requires the support of several tools and that it is inconvenient to use them separately. Hence, our environment integrates the different tools into a single realm. In addition, it provides a flexible interface for users to work with their programs and to study the impact of optimizations. A more specific feature of this environment is that it directly shows access patterns in the source code. This allows the user to locate the optimization target and to choose the optimization strategy.

References

1. D. I. Brown, S. T. Hackstadt, A. D. Malony, and B. Mohr. Program Analysis Environments for Parallel Language Systems: The TAU Environment. In Proc. of the Workshop on Environments and Tools For Parallel Scientific Computing, pages 162–171, May 1994.


2. H. Brunst, H.-Ch. Hoppe, W. E. Nagel, and M. Winkler. Performance Optimization for Large Scale Computing: The Scalable VAMPIR Approach. In Computational Science - ICCS 2001, International Conference, volume 2074 of LNCS, pages 751–760, 2001.
3. J. Dongarra, K. London, S. Moore, P. Mucci, and D. Terpstra. Using PAPI For Hardware Performance Monitoring On Linux Systems. In Linux Clusters: The HPC Revolution, June 2001.
4. J. Fenlason and R. Stallman. GNU gprof: The GNU Profiler. Available at http://www.gnu.org/software/binutils/manual/gprof-2.9.1/html_mono/gprof.html.
5. Intel Corporation. Intel Software Development Products. Available at http://www.intel.com/cd/software/products/asmo-na/eng/index.htm.
6. P. S. Magnusson and B. Werner. Efficient Memory Simulation in SimICS. In Proceedings of the 8th Annual Simulation Symposium, Phoenix, Arizona, USA, April 1995.
7. B. P. Miller, M. D. Callaghan, J. M. Cargille, J. K. Hollingsworth, R. B. Irvin, K. L. Karavanic, K. Kunchithapadam, and T. Newhall. The Paradyn parallel performance measurement tool. IEEE Computer, 28(11):37–46, November 1995.
8. R. Mueller-Pfefferkorn, W. E. Nagel, and B. Trenkler. Optimizing Cache Access: A Tool for Source-To-Source Transformations and Real-Life Compiler Tests. In Euro-Par 2004, Parallel Processing, volume 3149 of LNCS, pages 72–81, 2004.
9. B. Quaing, J. Tao, and W. Karl. YACO: A User Conducted Visualization Tool for Supporting Cache Optimization. In High Performance Computing and Communications: First International Conference, HPCC 2005, Proceedings, volume 3726 of Lecture Notes in Computer Science, pages 694–703, Sorrento, Italy, September 2005.
10. J. Tao and W. Karl. CacheIn: A Toolset for Comprehensive Cache Inspection. In Proceedings of ICCS 2005, volume 3515 of Lecture Notes in Computer Science, pages 182–190, May 2005.
11. J. Tao, S. Schloissnig, and W. Karl. Analysis of the Spatial and Temporal Locality in Data Accesses. In Proceedings of ICCS 2006, number 3992 in Lecture Notes in Computer Science, pages 502–509, May 2006.

Memory Allocation Tracing with VampirTrace

Matthias Jurenz, Ronny Brendel, Andreas Knüpfer, Matthias Müller, and Wolfgang E. Nagel

ZIH, TU Dresden, Germany

Abstract. The paper presents methods for instrumentation and measurement of applications’ memory allocation behavior over time. It provides some background about possible performance problems related to memory allocation as well as to memory allocator libraries. Then, different methods for data acquisition and representation are discussed. Finally, memory allocation tracing integrated in VampirTrace is demonstrated with a real-world HPC example application from aerodynamical simulation and optimization.

Keywords: Tracing, Performance Analysis, Memory Allocation.

1 Introduction

High Performance Computing (HPC) aims to achieve optimum performance on high-end platforms. The achievable performance is always limited by one or more resources, such as the available processors, floating point throughput, or communication bandwidth. Memory is another important resource, in particular for data intensive applications [8]. This paper presents methods for the instrumentation and measurement of a program's memory allocation behavior. The integration into VampirTrace [9] provides additional information for trace-based performance analysis and visualization. The rest of the first section discusses the influence of memory allocation on performance, introduces memory allocators, and references some related work. The following Sections 2 and 3 show various instrumentation approaches and ways of representing the result data. In Section 4 the approach is demonstrated with a real-world application example before the final Section 5 gives conclusions and an outlook on future work.

1.1 Impact of Memory Allocation on Application Performance

Memory consumption may cause notable performance effects on HPC applications, in particular for data intensive applications with very large memory requirements. There are three general categories of memory allocation related performance issues:

– memory requirements as such
– memory management overhead
– memory accesses

Firstly, excessive memory requirements by applications exceeding the available resources may lead to severe performance penalties.


An oversized memory allocation request might make an application fail or cause memory paging to secondary storage. The latter might happen unnoticed but brings severe performance penalties. Secondly, memory management may cause unnecessary overhead (see also Section 1.2). This might be due either to frequent allocation/deallocation or to memory placement, which can cause so-called memory fragmentation. Thirdly, accesses to allocated memory may cause performance problems. In general, tracing is unsuitable for tracking single memory accesses because of the disproportionate overhead; this aspect is already covered by many tools by means of hardware counters [4].

1.2 Memory Allocators

The memory allocator implements the software interface for applications to request memory. For example, in C it contains the malloc, realloc and free functions and a few more. Only the memory allocator communicates with the operating system kernel to actually request additional memory pages in the virtual address space. Usually, memory allocators handle small requests differently than medium or large requests. This is done for two reasons: firstly, because the operating system can partition memory only in multiples of the virtual memory page size, and secondly, to reduce the run-time overhead for small requests. Reserving a whole page for a very small request would cause a lot of overhead; thus, multiple small requests can be placed in the same page. This page can only be released again if all memory requests in it have been freed (memory fragmentation). However, the memory allocator can re-use freed parts for following requests with matching sizes. All allocators add a small amount of management data to the memory blocks delivered. Usually, this is one integer (or a few) of the machine's address size, see Section 2.3 below. Furthermore, memory allocators will pre-allocate a number of pages from the operating system for small requests. By this means, an expensive system call is avoided for every single small request. This provides a notable performance optimization, especially in multi-threaded situations [1]. Memory allocators come as part of a platform's system library (libc) or as external libraries that can be linked to an application explicitly. Well-known examples are the Lea allocator and the Hoard allocator library [1, 2].

1.3 Related Work

There is a large number of memory debugging tools available that check memory allocation/deallocation or even memory accesses. While such tools serve a different purpose than the presented work, they use similar techniques to intercept the allocation calls. One typical and small example is the ccmalloc library for debugging memory management and accesses. It uses dynamic wrapper functions to intercept memory management function calls [3]. The MemProf tool allows memory usage profiling [14]; it provides summary information about memory allocation per function. Valgrind is a very powerful memory debugger and profiler [15]. It offers more sophisticated debugging techniques to detect bugs in memory allocation/deallocation as well as invalid accesses. Unlike the previous examples, Valgrind depends on simulation of the target applications.


Even though this simulation is highly optimized, it causes a notable slowdown of the target applications, which is well acceptable for debugging but not for parallel performance analysis. Besides debugging tools, there are performance analysis tools focusing on memory allocation. The TAU tool set offers special operation modes for profiling and tracing of memory allocation [13]. It employs preprocessor instrumentation for memory management calls and takes advantage of platform-specific interfaces (Cray XT3). Also, it introduces the memory headroom metric for the remaining heap memory, see Section 2.3. Other existing performance measurement and analysis tools focus on memory access behavior. This is done either with summary information provided by PAPI counters [4] or with even more detailed information which is achievable only by simulation [6].

2 Instrumentation of Allocation/De-Allocation Operations

Instrumentation is the process of inserting measurement code into a target application. There are various general instrumentation techniques. They can be applied at the source code level, at compilation or linking time, or at the binary level. The different approaches vary in terms of platform dependence, measurement overhead, and even expressiveness. Below, some approaches are discussed with a special focus on the analysis of memory allocation.

2.1 The proc File System

Firstly, there are interfaces to query general process information. One of the most commonly known is the proc file system. It provides system-wide information as well as process-specific data. In /proc/meminfo the total available memory of the system is given, among other details. This might not always be the amount of memory actually available to a process because of user limits, machine partitioning, or other means. In /proc/<PID>/statm there is information about the memory usage of the process with the given PID. It provides the current memory size of heap and stack in multiples of the page size (usually 4 KB). This interface is widely supported but is rather slow in terms of query speed. For our experiments we use it optionally to determine the total memory consumption.
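As a small illustration of this interface, the following hedged C sketch reads the first field of /proc/self/statm (total program size in pages); it is Linux-specific and not the code used by the measurement system itself.

#include <stdio.h>

/* Return the total program size of the calling process in pages,
 * as reported by the first field of /proc/self/statm, or -1 on error. */
long total_program_size_pages(void) {
    long pages = -1;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &pages) != 1)
            pages = -1;
        fclose(f);
    }
    return pages;
}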

2.2 The mallinfo Interface

The mallinfo interface, which provides detailed allocation statistics, is offered by many allocators. Although it returns all desired information, is platform independent, and has a fast query speed, it is unusable for our tool: it uses 32-bit integers by design and is thus incapable of reporting on memory-intensive applications in a 64-bit address space.

2.3 Autonomous Recording

An autonomous approach can be implemented by intercepting all memory allocation operations via wrapper functions, i.e. functions that replace calls to a target function but then call it themselves; besides the original purpose of the function, additional tasks like logging, measurement, or checking can be performed. The wrappers evaluate the function's arguments, e.g. the memory size passed to malloc(), which are essential for recording the memory consumption.


However, for some calls the corresponding memory sizes are unknown, e.g. for free(). At this point it is necessary to infer the size of the deallocated memory area in order to update the record correctly. Explicitly storing this information would create memory overhead. Fortunately, memory managers do provide it: usually it is stored as an integer right in front of the memory area itself, and it can be accessed either via the malloc_usable_size function or directly. Note that this is the memory actually allocated, which is greater than or equal to the amount requested. For consistency, our approach always reports the amount of memory actually used, including the memory management overhead; this is greater than or equal to the user application's requests. Internally, the amount of memory currently allocated is stored in thread-private variables within the measurement infrastructure. Multiple threads in the same process record their memory consumption separately, which avoids unnecessary thread synchronization. Below is a short discussion of different ways to intercept all memory management calls like malloc, realloc, free, etc.

Pre-Processor Instrumentation. Compiler pre-processors can be used to replace all function calls with given names in the source code by alternative (wrapper) functions. Those need to be provided by the trace library. With this approach it is possible to miss certain allocation operations hidden in library calls with inaccessible sources, which can cause falsified results. For example, memory allocated within third party libraries might be freed by the user application directly. Therefore, this approach is not advisable without special precautions.

Malloc Hooks. The GNU glibc implementation provides a special hook mechanism that allows intercepting all calls to allocation and free functions. This is independent of compilation or source code access but relies on the underlying system library. Unlike the previous method, it is suitable for Fortran as well. It is very useful to guarantee a balanced recording, i.e. all allocation, re-allocation, and free operations are captured. Similar mechanisms are provided by other libc implementations, e.g. on SGI IRIX. This approach requires changing internal function pointers in a non-thread-safe way and therefore requires explicit locking if used in a multi-threaded environment. Nevertheless, this is the default way for MPI-only programs in our implementation.

Library Pre-Load Instrumentation. An alternative technique to intercept all allocation functions uses pre-loading of shared libraries. Within the instrumentation library there are alternative symbols for malloc, realloc, free, etc. which are preferred over all later symbols with the same name. The original memory management functions can be accessed explicitly by means of the dynamic loader library. This approach requires support for dynamic linking, which is very common but not available on all platforms (e.g. IBM's BlueGene/L). Unlike the previous approach, it can be implemented in a thread-safe manner without locking.
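A minimal sketch of such a pre-load wrapper is shown below. It assumes glibc on Linux, ignores bootstrap subtleties (dlsym may itself allocate memory), omits realloc and calloc, and only keeps a per-thread byte counter instead of writing trace records; it is not the actual VampirTrace implementation.

/* Build as a shared library and activate it with LD_PRELOAD. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <malloc.h>    /* malloc_usable_size */
#include <stddef.h>

static void *(*real_malloc)(size_t) = 0;
static void  (*real_free)(void *)   = 0;

/* bytes currently allocated by this thread, incl. allocator overhead */
static __thread size_t allocated_bytes = 0;

void *malloc(size_t size) {
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(size);
    if (p)
        allocated_bytes += malloc_usable_size(p);
    return p;
}

void free(void *p) {
    if (!real_free)
        real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");
    if (p)
        allocated_bytes -= malloc_usable_size(p);
    real_free(p);
}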


Exhausting Allocation. There is another remarkable strategy to detect the available amount of heap memory, the so-called memory headroom. It counts the free heap memory and infers the consumed memory from it, implying that the total amount of memory is constant [13]. The free heap memory is estimated by exhaustively allocating as much memory as possible (with an O(log n) strategy). Presumably, this approach has a big impact on the run-time overhead as well as on the memory manager's behavior. Furthermore, it will fail with optimistic memory managers like the default Linux one.

2.4 Measurement Overhead

Overhead from instrumentation can occur in terms of additional memory consumption and in terms of run-time. There is almost no memory overhead from allocation tracing, with the exception of one counter variable (a 64-bit integer) per process. The run-time overhead of the malloc hooks is not detectable within the given measurement accuracy of ≈ 5%. This was tested on a single-processor AMD Athlon64 machine with Linux and a pathological worst-case test, which issues 20,000,000 allocation requests of differing sizes without accessing the memory or performing any real computation. See Figure 1 for run-time results including wall clock time, system time, and user time.


Fig. 1. Run time overhead for instrumentation. The figure compares time without and with instrumentation for ten test runs with a pathological program. It shows total wall clock time (top), system time (middle) and user time (bottom) as well as the respective average values.

The test covered instrumentation only; no trace records were generated, neither in memory buffers nor in actual trace files. The total tracing overhead may increase slightly due to the additional records, which make the record buffer flush slightly more often.

3 Representation of Result Data

There is a common conception of how the various trace file formats store trace information: all employ so-called trace records as atomic pieces of information. Representation (syntax) and expressiveness (semantics) vary between the existing formats, though [11]. There are two obvious options for representing memory allocation information: firstly, to extend one or more trace formats with a novel record type with specified semantics; secondly, to re-use an existing record type which is suitable for transporting the information.


3.1 Novel Record Types

One or more new record types could be introduced, specifically tailored towards memory allocation information. These might be special flavors of the enter and leave record types enhanced by allocation information, or special record types carrying only allocation information that are used in addition to the existing enter and leave records. There may be several sub-types for allocation, re-allocation, and de-allocation, or a common one. Both ways share the same major advantage and disadvantage. On the one hand, they allow more semantics to be expressed, i.e. the presence of memory allocation information can reliably be determined from a specific record type. On the other hand, they would require any trace analysis tool to adapt explicitly to the new record types. We regard this as a major obstacle for general acceptance by performance analysis tool developers.

3.2 Generic Performance Counters

The memory allocation over time can be mapped to ordinary performance counter records [4, 9]. Performance counter records are designed to represent scalar attributes over discrete time, which is very suitable for memory allocation information. Usually, for every enter and leave event there needs to be a sample with a matching time stamp for every active counter. This results in two samples per function call per counter; only in this way can the counter values be associated with function calls. This is unfavorable for memory allocation information because it changes rather infrequently. Instead, the memory allocation counter is only updated on actual changes. Every update of this counter specifies a constant value for the memory consumption from the current time until the next sample (this behavior can be announced by setting a corresponding flag in the counter definition [10]). Thus, the memory consumption is represented as a step function (piecewise constant) with very low overhead in terms of extra records.
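The update-on-change idea can be sketched as follows; the record writer below is only a printing stub, and the real VampirTrace/OTF calls and counter IDs differ.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the trace-record writer of the measurement system. */
static void write_counter_sample(uint64_t time, int counter_id, uint64_t value) {
    printf("t=%llu counter=%d value=%llu\n",
           (unsigned long long)time, counter_id, (unsigned long long)value);
}

static uint64_t allocated_bytes = 0;   /* bytes currently allocated */

/* Emit a counter sample only when the value actually changes; between two
 * samples the value stays constant, so the consumption appears as a step
 * function in the trace with few extra records. */
static void on_allocation_change(uint64_t time, long long delta) {
    allocated_bytes += (uint64_t)delta;
    write_counter_sample(time, 1 /* hypothetical counter id */, allocated_bytes);
}

int main(void) {
    on_allocation_change(10, +4096);
    on_allocation_change(42, +65536);
    on_allocation_change(99, -4096);
    return 0;
}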

4 Application Example

The example that memory allocation tracing was tested with originates from an aerodynamics application. The sequential application consists of two components, one for simulation and one for optimization of the simulation results. The first part is the flow solver TAUij from the German Aerospace Center (DLR), which is a quasi-2D version of TAUijk [2]. It solves the quasi-2D Euler equations around airfoils and computes the aerodynamic coefficients for drag, lift, and pitching moment. The second part uses the ADOL-C software package for Automatic Differentiation (AD). ADOL-C attaches to TAUij by overloading variable types and operators and provides mechanisms to evaluate various kinds of derivatives of the original program's computation. Here, it is used to compute the gradient of the drag coefficient with respect to the input variables controlling the wing geometry. The gradient is then used for optimizing the wing shape, which involves iterated simulation and gradient evaluation. See [5, 7, 12] for more details and background information. In the example the so-called reverse mode of AD is applied to compute the gradient with respect to a large number of input variables. Basically, this requires storing all intermediate results of the computation in a so-called memory tape, which is traversed in reverse order in the stage of gradient computation.


Fig. 2. Vampir process timeline with counters MEM_TOT_ALLOC and MEM_TOT_PAGES

There are some optimizations for fixpoint iteration algorithms, but the reverse mode still causes excessive memory requirements [7]. This is shown in the Vampir screenshot in Figure 2. The test run consumes ≈ 15 GB of memory (MEM_TOT_ALLOC); the exact amount is neither constant nor easily predictable. The analysis revealed that the actual usage is only ≈ 5.4 GB (MEM_TOT_PAGES). The experiments involved additional PAPI counters, which showed that the memory-intensive sections of the application suffer from high level-three cache miss rates due to the linear traversal of the memory tape.

5 Conclusion and Outlook

The paper discussed various methods of memory allocation tracing for HPC applications. They have been implemented in the VampirTrace measurement system [9] and used with a real-world example application, where they were able to provide insight into the program's run-time behavior that was not obviously visible without this approach. Future work will focus on alternative ways to query kernel statistics about the memory pages assigned to processes. We would also like to decrease the run-time overhead; special kernel modules might be a solution, which is currently being attempted for I/O tracing. We will apply memory allocation tracing to more HPC programs in the course of performance analysis to learn more about typical memory management behavior. The convenient integration into VampirTrace allows this by adding a mere run-time option.

Acknowledgments We’d like to thank Carsten Moldenhauer and Andrea Walther from IWR, TU Dresden as well as Nicolas R. Gauger and Ralf Heinrich from the Institute of Aerodynamics and Flow Technology at DLR Braunschweig for their support with the example application.


References

[1] E.D. Berger, K.S. McKinley, R.D. Blumofe, and P.R. Wilson. Hoard: A Scalable Memory Allocator for Multithreaded Applications. In Proc. of ASPLOS-IX, Cambridge, MA, 2000.
[2] E.D. Berger, B.G. Zorn, and K.S. McKinley. Reconsidering custom memory allocation. In Proc. of OOPSLA'02, New York, NY, USA, 2002. ACM Press.
[3] Armin Biere. ccmalloc. ETH Zurich, 2003. http://www.inf.ethz.ch/personal/projects/ccmalloc/.
[4] S. Browne, J. Dongarra, N. Garner, G. Ho, and P. Mucci. A Portable Programming Interface for Performance Evaluation on Modern Processors. The International Journal of High Performance Computing Applications, 14(3):189–204, 2000.
[5] N. Gauger, A. Walther, C. Moldenhauer, and M. Widhalm. Automatic differentiation of an entire design chain with applications. In Jahresbericht der Arbeitsgemeinschaft Strömungen mit Ablösung STAB. 2006.
[6] M. Gerndt and T. Li. Automated Analysis of Memory Access Behavior. In Proceedings of HIPS-HPGC 2005 and IPDPS 2005, Denver, Colorado, USA, Apr 2005.
[7] A. Griewank, D. Juedes, and J. Utke. ADOL-C: A package for the automatic differentiation of algorithms written in C/C++. ACM Trans. Math. Softw., 22:131–167, 1996.
[8] G. Juckeland, M. S. Müller, W. E. Nagel, and St. Pflüger. Accessing Data on SGI Altix: An Experience with Reality. In Proc. of WMPI-2006, Austin, TX, USA, Feb 2006.
[9] Matthias Jurenz. VampirTrace Software and Documentation. ZIH, TU Dresden, Nov 2006. http://www.tu-dresden.de/zih/vampirtrace/.
[10] Andreas Knüpfer, Ronny Brendel, Holger Brunst, Hartmut Mix, and Wolfgang E. Nagel. Introducing the Open Trace Format (OTF). In Proc. of ICCS 2006: 6th Intl. Conference on Computational Science, Springer LNCS 3992, pages 526–533, Reading, UK, May 2006.
[11] Bernd Mohr. Standardization of event traces considered harmful: or is an implementation of object-independent event trace monitoring and analysis systems possible? Journal: Environments and tools for parallel scientific computing, pages 103–124, 1993.
[12] S. Schlenkrich, A. Walther, N.R. Gauger, and R. Heinrich. Differentiating Fixed Point Iterations with ADOL-C: Gradient Calculation for Fluid Dynamics. In Proc. of HPSC 2006.
[13] S. Shende, A. D. Malony, A. Morris, and P. Beckman. Performance and memory evaluation using TAU. In Proc. of the Cray User's Group Conference (CUG 2006), 2006.
[14] Owen Taylor. MemProf. http://www.gnome.org/projects/memprof/.
[15] Valgrind.org. Valgrind, 2006. http://valgrind.org/info/about.html.

Automatic Memory Access Analysis with Periscope

Michael Gerndt and Edmond Kereku

Technische Universität München, Fakultät für Informatik I10, Boltzmannstr. 3, 85748 Garching
[email protected]

Abstract. Periscope is a distributed automatic online performance analysis system for large scale parallel systems. It consists of a set of analysis agents distributed on the parallel machine. This article presents the support in Periscope for analyzing inefficiencies in the memory access behavior of the applications. It applies data structure specific analysis and is able to identify performance bottlenecks due to remote memory access on the Altix 4700 ccNUMA supercomputer.

Keywords: Performance analysis, supercomputers, program tuning, memory accesses analysis.

1 Introduction

Performance analysis tools help users write efficient codes for current high performance machines. Since the architectures of today's supercomputers with thousands of processors expose multiple hierarchical levels to the programmer, program optimization cannot be performed without experimentation. To tune applications, the user has to carefully balance the number of MPI processes against the number of threads in a hybrid programming style, has to distribute the data appropriately among the memories of the processors, has to optimize remote data accesses via message aggregation, prefetching, and asynchronous communication, and, finally, has to tune the performance of a single processor. Performance analysis tools can provide the user with measurements of the program's performance and thus help him find the right transformations for performance improvement. Since measuring performance data and storing those data for further analysis is not a very scalable approach in most tools, most tools are limited to experiments on a small number of processors. To investigate the performance of large experiments, performance analysis has to be done online in a distributed fashion, eliminating the need to transport huge amounts of performance data through the parallel machine's network and to store those data in files for further analysis. Periscope [5] is such a distributed online performance analysis tool. It consists of a set of autonomous agents that search for performance bottlenecks in a subset of the application's processes and threads.


subset of the application’s processes and threads. The agents request measurements of the monitoring system, retrieve the data, and use the data to identify performance bottlenecks. The types of bottlenecks searched are formally defined in the APART Specification Language (ASL) [1,2]. The focus of this paper is on Periscope’s support for analyzing the memory access behavior of programs. Periscope searches not only for bottlenecks related to MPI and OpenMP, but also for inefficiencies in the memory accesses. Novel features are the support for identifying data structure-related bottlenecks and remote memory accesses bottlenecks in ccNUMA systems. The next section presents work related to the automatic performance analysis approach in Periscope. Section 3 presents Periscope’s architecture and its special features for memory access analysis. Section 5 presents several case studies. Section 6 gives a short summary.

2 Related Work

Several projects in the performance tools community are concerned with the automation of the performance analysis process. Paradyn's [9] Performance Consultant automatically searches for performance bottlenecks in a running application by using a dynamic instrumentation approach. Based on hypotheses about potential performance problems, measurement probes are inserted into the running program. Recently, MRNet [10] has been developed for the efficient collection of distributed performance data. The Expert [12] tool developed at Forschungszentrum Jülich performs an automated post-mortem search for patterns of inefficient program execution in event traces. Potential problems with this approach are large data sets and long analysis times for long-running applications, which hinder the application of this approach on larger parallel machines. Aksum [3], developed at the University of Vienna, is based on source code instrumentation to capture profile-based performance data, which is stored in a relational database. The data is then analyzed by a tool implemented in Java that performs an automatic search for performance problems based on JavaPSL, a Java version of ASL.

3 Architecture

Periscope consists of a graphical user interface based on Eclipse, a hierarchy of analysis agents and two separate monitoring systems (Figure 1). The graphical user interface allows the user to start up the analysis process and to inspect the results. The agent hierarchy performs the actual analysis. The node agents autonomously search for performance problems which have been specified with ASL. Typically, a node agent is started on each SMP node of the target machine. This node agent is responsible for the processes and threads on that node. Detected performance problems are reported to the master agent that communicates with the performance cockpit.


Fig. 1. Periscope currently consists of a GUI based on Eclipse, a hierarchy of analysis agents, and two separate monitoring systems

The node agents access a performance monitoring system for obtaining the performance data required for the analysis. Periscope currently supports two different monitors, the Peridot monitor [4] developed in the Peridot project focusing on OpenMP and MPI performance data, and the EP-Cache monitor [8] developed in the EP-Cache project focusing on memory hierarchy information. The node agents perform a sequence of experiments. Each experiment lasts for a program phase, which is defined by the programmer, or for a predefined amount of execution time. Before a new experiment starts, an agent determines a new set of hypothetical performance problems based on the predefined ASL properties and the already found problems. It then requests the necessary performance data for proving the hypotheses and starts the experiment. After the experiment, the hypotheses are evaluated based on the performance data obtained from the monitor.
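To illustrate the structure of this search loop, here is a heavily simplified and hypothetical C sketch; the hypothesis refinement and the interaction with the monitor are only stubs and do not reflect Periscope's actual interfaces.

#include <stdio.h>

typedef struct { const char *name; int proven; } hypothesis;

/* stub: derive new candidate properties from those already proven */
static int refine(hypothesis *h, int n) { (void)h; return n; }

/* stub: request the needed counters from the monitor, run one program
 * phase, and evaluate each hypothesis on the measured data */
static void run_experiment(hypothesis *h, int n) {
    for (int i = 0; i < n; i++)
        h[i].proven = (i % 2 == 0);   /* pretend evaluation result */
}

int main(void) {
    hypothesis h[8] = { { "LC2DMissRate", 0 }, { "LC3DMissRate", 0 } };
    int n = 2;
    for (int experiment = 0; experiment < 3; experiment++) {
        run_experiment(h, n);      /* one experiment per program phase       */
        for (int i = 0; i < n; i++)
            if (h[i].proven)
                printf("experiment %d: %s holds\n", experiment, h[i].name);
        n = refine(h, n);          /* new hypotheses for the next experiment */
    }
    return 0;
}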

4 Monitoring Memory Accesses

The analysis of the application’s memory access behavior is based on the EPCache monitor. It allows to access the hardware counters of the machine for evaluating properties such as the example for L1-cache misses in Figure 2. The example demonstrates the specification of performance properties with ASL. The performance property shown here identifies a region with high L1cache miss rate. The data model, specified in ASL too, contains a class SeqPerf which contains a reference to a program region, a reference to a process, and a number of cache-related and other metrics. The instance of SeqPerf available in the property is called the property’s context. It defines the program region, e.g., a function, and the process for which the property is tested. A novel feature of the EP-Cache monitor is its data structure support. The node agent can request measurements for specific data structures via the


PROPERTY LC1ReadMissesOverMemRef(SeqPerf s){
    CONDITION:  s.lc1_data_read_miss/s.read_access > 0.01;
    CONFIDENCE: 1;
    SEVERITY:   s.lc1_data_read_miss/s.read_access;
}

Fig. 2. This performance property identifies a significant L1-cache miss rate

The node agent specifies a request via the MRI Request function. Its parameters determine, for example, that L1-cache misses are to be measured for array ACOORD in a specific parallel loop in subroutine FOO. Such a request would be generated for a new experiment after the property LC1MissRate was proven in the previous experiment. To enable the node agent to specify such detailed requests, it requires static information about the program, e.g., it has to know that array ACOORD is accessed in that loop. This program information is generated by the source-to-source instrumenter developed for Periscope, which emits it in an XML-based format called the Standard Intermediate Program Representation (SIR) [11]. This information is read by the agents when the analysis is started. At runtime, the MRI request for measuring cache misses for a data structure is translated, based on the debug information in the executable, into the associated range of virtual addresses. The measurement is of course only possible if the hardware counters of the target processor support such restricted counting. Our main target architecture is the Altix 4700 installed at the Leibniz Computing Centre in Munich. It consists of 4096 Itanium 2 processors, which have a very extensive set of hardware counters. On the Itanium 2, such measurements can be restricted to specific address ranges and thus can be executed without program intrusion. Another feature of the Itanium 2's hardware counters exploited by Periscope is the support for counting memory accesses that last more than a given number of clock cycles. One of these events is DATA_EAR_CACHE_LAT8, for example, which returns the number of memory operations with a memory access latency of more than eight cycles. Similar events return the number of operations with a memory access latency greater than LAT4, LAT16, and so on up to LAT4096, increasing by powers of 2. These counters can be used to identify memory references that go to local memory on the Altix 4700 or to remote memory. Since the Altix is a ccNUMA system, all processors can access all the memory, but, for efficiency reasons, most of the accesses should go to the processor's local memory. It is therefore very important to identify non-local access behavior. Edmond Kereku specified appropriate ASL performance properties for such situations in his dissertation [7]. Based on elementary properties such as the LC1Miss property and similar properties for individual data structures and for remote memory accesses, higher-level properties can be deduced.


For example, the property UnbalancedDMissRateInThreads shown in Figure 3 gives information on the behavior across multiple threads.

PROPERTY TEMPLATE UnbalancedDMissRateInThreads (MRIParPef mpp){
    LET
        const float threshold;
        float mean = mean_func_t(MissRateParFunc, mpp);
        float max  = max_func_t(MissRateParFunc, mpp);
        float min  = min_func_t(MissRateParFunc, mpp);
        float dev_to_max = max - mean;
        float dev_to_min = mean - min;
        float max_exec_time = MAX(mpp.parT[tid] WHERE tid IN mpp.nThreads);
    IN
    CONDITION :  MAX(dev_to_max, dev_to_min) / mean > threshold;
    CONFIDENCE : 1;
    SEVERITY :   MAX(dev_to_max, dev_to_min) / mean * max_exec_time;
}

PROPERTY UnbalancedDMissRateInThreads UnbalancedLC1DReadMissRatePar;
PROPERTY UnbalancedDMissRateInThreads UnbalancedLC2DReadMissRatePar;
PROPERTY UnbalancedDMissRateInThreads UnbalancedLC3DReadMissRatePar;

Fig. 3. Performance properties with a similar specification can be expressed via property templates. This template specifies an unbalanced miss rate across threads in a parallel OpenMP program.

The template UnbalancedDMissRateInThreads is used to create multiple properties for the different cache levels. The property template is parameterized by a function, which is replaced in the specifications of the individual properties by a function returning the appropriate miss rate, i.e., for the L1, L2, and L3 caches respectively.
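Expressed in plain C, the condition and severity of this template amount to the following computation over per-thread miss rates; the function and variable names are ours, and max_exec_time stands for the longest per-thread execution time of the parallel region.

#include <stdio.h>

static double max2(double a, double b) { return a > b ? a : b; }

/* Mirrors the CONDITION and SEVERITY formulas of Figure 3. */
static double unbalanced_severity(const double *miss_rate, int n_threads,
                                  double threshold, double max_exec_time,
                                  int *condition_holds) {
    double mean = 0.0, max = miss_rate[0], min = miss_rate[0];
    for (int t = 0; t < n_threads; t++) {
        mean += miss_rate[t];
        if (miss_rate[t] > max) max = miss_rate[t];
        if (miss_rate[t] < min) min = miss_rate[t];
    }
    mean /= n_threads;
    double deviation = max2(max - mean, mean - min);
    *condition_holds = (deviation / mean > threshold);
    return deviation / mean * max_exec_time;   /* SEVERITY */
}

int main(void) {
    double rates[4] = { 0.02, 0.03, 0.08, 0.02 };   /* illustrative values */
    int holds;
    double sev = unbalanced_severity(rates, 4, 0.5, 1.7, &holds);
    printf("condition=%d severity=%f\n", holds, sev);
    return 0;
}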

5 Application Experiments

We used Periscope to analyze applications on the Itanium SMP nodes of the Infiniband Cluster at LRR/TUM. Table 1 shows the list of properties we searched for during our experiments, the threshold set in the condition of each property, and the number of regions on which the property holds for each of the test applications.


Table 1. The list of properties searched on the test applications running on the Infiniband Cluster

Property            Threshold   LU   FT   SWIM   PGAUSS
LC2DMissRate            5%       0   14      0        5
LC2DReadMissRate        5%       0    3      0        5
LC3DMissRate            2%      13    3      7        0
LC3DReadMissRate        2%       1    3      2        0

We did not count the data structures, only the code regions. A property holds for a region if the cache miss rate is higher than 5% for LC2 and higher than 2% for LC3. Looking at the table, we can conclude that FT and PGAUSS have more L2 cache problems, while LU and SWIM have more L3 cache problems.

We present here the results of Periscope's automatic search for the LU decomposition application in the form of search paths. The paths start from the region where the search began and go down to the deepest subregion or data structure for which the property holds. We omit the severities associated with each evaluated property; instead, we provide the measured miss rates along each search path's regions and data structures. The search path that generated the property with the highest severity is the one ending in subroutine buts (see below). Please note that in some of the tests the region with the highest miss rate does not necessarily have the property with the highest severity; the severity also depends on the region's execution time, which for presentation reasons is not shown here.

The results of the automatic search for LU are the following search paths:

Path 1: Application Phase( USER REGION, ssor.f, 109 )
        ( PARALLEL REGION, ssor.f, 120 )
        ( WORKSHARE DO, ssor.f, 126 )
        rsd( DATA STRUCTURE, ssor.f, 4 )
        LC3 miss rates reported along this path: 0.038, 0.029, 0.029

Path 2: Application Phase( USER REGION, ssor.f, 109 )
        ( PARALLEL REGION, ssor.f, 120 )
        ( LOOP REGION, ssor.f, 149 )
        jacld( CALL REGION, ssor.f, 156 )
        jacld( SUB REGION, jacld.f, 5 )
        ( WORKSHARE DO, jacld.f, 39 )
        u( DATA STRUCTURE, jacld.f, 5 )
        LC3 miss rates reported along this path: 0.038, 0.037, 0.030, 0.030, 0.040

Path 3: Application Phase( USER REGION, ssor.f, 109 )
        ( PARALLEL REGION, ssor.f, 120 )
        ( LOOP REGION, ssor.f, 149 )
        blts( CALL REGION, ssor.f, 165 )
        blts( SUB REGION, blts.f, 4 )
        ( WORKSHARE DO, blts.f, 75 )
        v( DATA STRUCTURE, blts.f, 4 )
        LC3 miss rates reported along this path: 0.038, 0.037, 0.052, 0.055, 0.025

Path 4: Application Phase( USER REGION, ssor.f, 109 )
        ( PARALLEL REGION, ssor.f, 120 )
        ( LOOP REGION, ssor.f, 184 )
        buts( CALL REGION, ssor.f, 200 )
        buts( SUB REGION, buts.f, 4 )
        ( WORKSHARE DO, buts.f, 75 )
        v( DATA STRUCTURE, buts.f, 4 )
        LC3 miss rates reported along this path: 0.038, 0.038, 0.051, 0.054, 0.025

Path 5: Application Phase( USER REGION, ssor.f, 109 )
        ( PARALLEL REGION, ssor.f, 120 )
        ( WORKSHARE DO, ssor.f, 221 )
        u( DATA STRUCTURE, ssor.f, 4 )
        rsd( DATA STRUCTURE, ssor.f, 4 )
        LC3 miss rates reported along this path: 0.038, 0.039, 0.031, 0.062

The LC3 read miss rate column of the original table is filled only for a few of these entries; the reported values are 0.029, 0.031, and 0.062.

The search for L2 cache problems did not detect any problem but, as the results show, LU has L3 cache problems. In addition to LC3DMissRate, our search also refined to the property LC3DReadMissRate. As the search paths show, LC3DReadMissRate does not hold on the majority of the regions where LC3DMissRate was proven. This means that most of the L3 cache problems in LU are write-related problems. The search path that discovered the most severe problem refined from the application phase down to subroutine buts. The data structure v is the source of the problem. The most important data structures of LU are u and rsd; in fact, the variable v is the local name for rsd, which is passed to subroutine buts as a parameter.

6 Summary

This paper presented the automatic memory access analysis support in Periscope. Periscope's analysis automatically runs several experiments during a single program execution to incrementally search for performance bottlenecks. If no repetitive program phases are marked by the user, the application can even be restarted automatically to perform additional experiments. Periscope uses this approach to search for data-structure-related memory access bottlenecks as well as for remote memory access bottlenecks in ccNUMA architectures. Due to the limited number of performance counters of current processors, multiple experiments are required to evaluate all the performance hypotheses for critical program regions. The overall overhead of the analysis depends on the frequency with which measurements are taken. In our tests, the regions to be analyzed were outer loops or parallel regions with significant runtime, so the accesses to the performance counters did not introduce significant overhead.


The strategies applied in refining towards more detailed performance bottlenecks can be found in [7] and will be published elsewhere.

Acknowledgments. This work is funded by the Deutsche Forschungsgemeinschaft under Contract No. GE 1635/1-1.

References

1. T. Fahringer, M. Gerndt, G. Riley, and J. Träff. Knowledge specification for automatic performance analysis. APART Technical Report, www.fz-juelich.de/apart, 2001.
2. T. Fahringer, M. Gerndt, G. Riley, and J.L. Träff. Specification of performance problems in MPI-programs with ASL. International Conference on Parallel Processing (ICPP'00), pp. 51–58, 2000.
3. T. Fahringer and C. Seragiotto. Aksum: A performance analysis tool for parallel and distributed applications. Performance Analysis and Grid Computing, Eds. V. Getov, M. Gerndt, A. Hoisie, A. Malony, B. Miller, Kluwer Academic Publisher, ISBN 1-4020-7693-2, pp. 189–210, 2003.
4. K. Fürlinger and M. Gerndt. Peridot: Towards automated runtime detection of performance bottlenecks. High Performance Computing in Science and Engineering, Garching 2004, pp. 193–202, Springer, 2005.
5. M. Gerndt, K. Fürlinger, and E. Kereku. Advanced techniques for performance analysis. Parallel Computing: Current & Future Issues of High-End Computing (Proceedings of the International Conference ParCo 2005), Eds: G.R. Joubert, W.E. Nagel, F.J. Peters, O. Plata, P. Tirado, E. Zapata, NIC Series Volume 33, ISBN 3-00-017352-8, pp. 15–26a, 2006.
6. M. Gerndt and E. Kereku. Monitoring Request Interface version 1.0. TUM Technical Report, 2003.
7. E. Kereku. Automatic Performance Analysis for Memory Hierarchies and Threaded Applications on SMP Systems. PhD thesis, Technische Universität München, 2006.
8. E. Kereku and M. Gerndt. The EP-Cache automatic monitoring system. International Conference on Parallel and Distributed Systems (PDCS 2005), 2005.
9. B.P. Miller, M.D. Callaghan, J.M. Cargille, J.K. Hollingsworth, R.B. Irvin, K.L. Karavanic, K. Kunchithapadam, and T. Newhall. The Paradyn parallel performance measurement tool. IEEE Computer, Vol. 28, No. 11, pp. 37–46, 1995.
10. Ph. C. Roth, D. C. Arnold, and B. P. Miller. MRNet: A software-based multicast/reduction network for scalable tools. SC2003, Phoenix, November 2003.
11. C. Seragiotto, H. Truong, T. Fahringer, B. Mohr, M. Gerndt, and T. Li. Standardized Intermediate Representation for Fortran, Java, C and C++ programs. APART Working Group Technical Report, Institute for Software Science, University of Vienna, October 2004.
12. F. Wolf and B. Mohr. Automatic performance analysis of hybrid MPI/OpenMP applications. 11th Euromicro Conference on Parallel, Distributed and Network-Based Processing, pp. 13–22, 2003.

A Regressive Problem Solver That Uses Knowledgelet

Kuodi Jian

Department of Information and Computer Science
Metropolitan State University
Saint Paul, Minnesota 55106-5000
[email protected]

Abstract. This paper presents a new idea of how to reduce search space by a general problem solver. The general problem solver, Regressive Total Order Planner with Knowledgelet (RTOPKLT), can be used in intelligent systems or in software agent architectures. Problem solving involves finding a sequence of available actions (operators) that can transfer an initial state to a goal state. Here, a problem is defined in terms of a set of logical assertions that define an initial state and a goal state. With too little information, reasoning and learning systems cannot perform in most cases. On the other hand, too much information can cause the performance of a problem solver to degrade, in both accuracy and efficiency. Therefore, it is important to determine what information is relevant and to organize this information in an easy to retrieve manner.

Keywords: Software Agent, Artificial Intelligence, Planning.

1 Introduction

In recent years, as computing power has increased steadily, computers have been used in a wide variety of areas that need some intelligence. All of these intelligent systems need a problem solver. The problems that require a problem solver include: (1) generating a plan of action for a robot, (2) interpreting an utterance by reasoning about the goals a speaker is trying to achieve, (3) allocating the use of resources, and (4) automatically writing a program to solve a problem. The heart of a problem solver is a planner. There are many kinds of planners [1]. In this paper, we present a regressive planner that uses knowledgelets. A planner is complete if it is able to find every solution, given that solutions exist and that the planner runs to completion. There is a trade-off between response time and completeness. We can improve the response time of a planner by using some criteria to prune the search space, but that will cause the loss of solutions to the problem. On the other hand, if a planner searches every possible space, the search space grows exponentially with the number of operators, and the response time deteriorates to the point of being intolerable for many problems of reasonable size. This paper offers a solution that gives us both quick response time and completeness of solutions. The planner presented in this paper has the following characteristics:


• It has fast response time, since the planner searches only relevant operators.
• It can use an existing relational database to store pieces of knowledge, called knowledgelets.
• It is built on the proven regressive idea and is conceptually simple and complete.
• It is general, which means that it can be used in different areas with little or no change.

The rest of the paper is organized as the following: section 2 gives the description of knowledgelet; section 3 presents the regressive general problem solver, RTOPKLT, and its application to a planning problem; and section 4 is the summary.

2 Knowledgelet A knowledgelet is an object that describes the relevant knowledge of solving a particular problem. By using knowledgelets, the search time by a problem solver can be greatly reduced. A knowledgelet consists of slots of data structure and can be stored in most existing databases (object databases will be the best fit). 2.1 Structure of Knowledgelet Knowledgelet constrains the world states by giving only subsets of operators (actions) that are relevant to the problem at hand. The search space can be an open world space or a Local Closed World (LCW) space [2]. In a LCW space, we assume that we know

Slot 1: Name of the knowledgelet
Slot 2: Domain name : context description
Slot 3: Goal 1: statement
        Initial condition 1: partial solution.
        Initial condition 2: partial solution.
        ...
        Goal 2: statement
        Initial condition 1: partial solution.
        Initial condition 2: partial solution.
        ...
Slot 4: Language building blocks : LCW or open world
        Operator set

Fig. 1. The structure of a knowledgelet


everything. If something is not positively stated, we conclude that it is false. The benefit of using LCW is that we need not record all the world states. Figure 1 is the diagram of a knowledgelet. From the diagram, we see that a knowledgelet has a name. This name can be used by a database as a key to store it. A knowledgelet is like a blob of knowledge with a domain name (which can also be used as a key) and a context that describes the domain name. We all know that the same words carry different meanings in different contexts; the domain name and its context resolve this ambiguity. In slot 3, there are goals that can be searched. Under each goal, there is a set of partial solutions to the goal. Each partial solution is associated with an initial state. This means that the same goal can be achieved from different starting points. For example, in a block world, the goal of block C on top of block B, which is in turn on top of block A, can be achieved either from the initial state (on A B) ∧ (on C Table) or from the initial state (on A Table) ∧ (on B Table) ∧ (on C Table). A partial solution is a partial plan (containing an ordered sequence of operators) that will change the world from an initial state to the goal state. To the regressive planner, this partial plan can be viewed as one big operator [3]. In slot 4, there is a field that contains the language type and a field that contains the available operator set. There are two language types: the open world assumption and the Local Closed World (LCW) assumption. The LCW language building block is an LCW lingo in that particular domain; the open world language building block is an open world lingo in that particular domain. The language building blocks include sentences, axioms, rules, and constraints. One example of the constraints is the domain rules. Domain rules are the guiding principles that operate in a particular domain. They may specify a particular sequence of operators (for example, one operator must precede another, etc.). If there is no existing partial plan in slot 3 to match a goal, the planner will construct a plan from the available operators in slot 4 with the help of the language lingo in the language-type field. The search proceeds through slot 1, slot 2, slot 3, and slot 4 in that order. By searching the partial solutions first, we improve the response time, since we do not have to construct everything from scratch.
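To make the slot structure concrete, here is a minimal sketch of how a knowledgelet could be represented and queried. The class and field names (Knowledgelet, PartialSolution, lookup) are illustrative assumptions, not definitions from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class PartialSolution:
    initial_condition: str          # e.g. "(on A Table) & (on B Table) & (on C Table)"
    operator_sequence: List[str]    # ordered operators; usable as one "big operator"

@dataclass
class Knowledgelet:
    name: str                                # slot 1: key for database storage
    domain: str                              # slot 2: domain name
    context: str                             # slot 2: context that disambiguates the domain
    goals: Dict[str, List[PartialSolution]]  # slot 3: goal statement -> partial solutions
    language_type: str = "LCW"               # slot 4: "LCW" or "open world"
    operators: List[str] = field(default_factory=list)  # slot 4: available operator set

    def lookup(self, goal: str, initial_condition: str) -> Optional[PartialSolution]:
        """Search slot 3 first; return None so the planner falls back to slot 4 operators."""
        for candidate in self.goals.get(goal, []):
            if candidate.initial_condition == initial_condition:
                return candidate
        return None

# usage: the block-world example from the text
klet = Knowledgelet(
    name="blocks-1", domain="block world", context="toy planning domain",
    goals={"(on C B) & (on B A)": [
        PartialSolution("(on A Table) & (on B Table) & (on C Table)",
                        ["stack(B, A)", "stack(C, B)"])]},
    operators=["pickup(x)", "putdown(x)", "stack(x, y)", "unstack(x, y)"])
print(klet.lookup("(on C B) & (on B A)",
                  "(on A Table) & (on B Table) & (on C Table)"))
```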

3 Regressive General Total Order Planner, RTOP In artificial intelligence, problem solving involves finding a sequence of actions (operators) that result in a goal state. A lot of real world problems can be cast into planning problems in terms of an initial state, a set of goal state conditions, and a set of relevant legal operators; each legal operator is defined in terms of preconditions and effects. Preconditions of an operator must be satisfied before the operator can be applied. All the reachable states comprise world space. A world state in a world space can be represented by state variables and can only be changed by legal operators. There are basically two ways to search through a world space. One is called “progression” and the other is called “regression.” A progressive planner searches


forward from the initial world state until it finds the goal state. A regressive planner searches backward from the goal state until it finds the initial state. As the search progresses, both the forward and the backward search algorithm must choose non-deterministically the next operator to consider. The planner discussed in this paper belongs to the regressive type. Since the answer returned by our planner is total-ordered (represented by

(Ḃ(0; r) > 0), the minus sign if the Universe is initially contracting (Ḃ(0; r) < 0), and τ(r) is an arbitrary function of r.
Remark 4. Also in this case, for t = τ(r) ⇒ Y = 0 ⇒ B = 0; then, from Remark 2, t = T(r) and consequently τ(r) ≡ T(r). So we can calculate the function τ(r) from the initial value B(0; r) = r:
T(r) = ∓ [ c√(kω) √(2√k + c²ω) − 2k arcsinh( c√ω / √(2√k) ) ] / (3c³ω^{3/2})   (21)

4   Study of the Behaviour of the Universe in Proximity of the Collapse/Generation Times by an Expansion in Fractional Power Series

Now we want to study the behaviour of the universe in proximity of the collapse/expansion times by an expansion in fractional (Puiseux) series.⁷
Remark 5. In proximity of the times of generation or collapse the evolution has the same behaviour regardless of its initial geometry. In addition, the function T(r) has approximately the same form in all three cases ω1 = 0, ω1 > 0 and ω1 < 0.
4.1   Initial Principal Curvature ω1 Positive

We have already remarked that it is not possible to solve (14) explicitly with respect to B, but we can approximate the exact solution by a suitable fractional power series (Puiseux series):⁸
[ −√((h − Y)Y) + h arctan√( Y/(h − Y) ) ] / √(2k)   (22)
⁷ In [11] the approximate explicit solution was obtained through an expansion in power series of the parametric equations, therefore by a double expansion in power series.
⁸ It is not possible to expand the second member of (14) in a simple power series with respect to Y, but we can develop it in a Maclaurin series with respect to √Y, thus obtaining a fractional power series. As is known, fractional power series are particular cases of Puiseux series (see e.g. [5]).


= (2 Y^{3/2}) / (3√k) + Y^{5/2} / (5h√(2k)) + 3 Y^{7/2} / (28h²√(2k)) + ···   (23)

By truncating the fractional series at the first term (with precision 3/2), we find
t − τ(r) = ± (2/(3√k)) Y^{3/2}.   (24)
So in our approximation we found the same expression (8) that characterizes the case ω1 = 0: in proximity of the generation or collapse times, the r-shells expand or collapse with the same behaviour as in the case ω1 = 0, and the function T(r) has, approximately, the form (9), in agreement with [11].
4.2   Initial Principal Curvature ω1 Negative

Also in this case, since it is not possible to solve (20) explicitly with respect to B, we can approximate the exact solution by a Puiseux series:
[ c√(ωY) √(2√k + c²ωY) √k − 2k arcsinh( c√(ωY) / √(2√k) ) ] / (3c³ω^{3/2})   (25)
= (2 Y^{3/2}) / (3√k) − (c²ω Y^{5/2}) / (10√2 k^{3/2}) + (3c⁴ω² Y^{7/2}) / (112√2 k^{5/2}) + ···   (26)

By truncating the fractional series at the first term (with precision 3/2), we find
t − τ(r) = ± (2/(3√k)) Y^{3/2}.   (27)
So in our approximation we again found the same equation that characterizes the case ω1 = 0: in proximity of the generation or collapse times, the r-shells expand or collapse with the same behaviour as in the case ω1 = 0. Moreover, also in this case, the function T(r) has, approximately, the form (9) (see also [11]).
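The role of the √Y substitution (footnote 8) can be illustrated symbolically. The sketch below is not the authors' computation; it only expands the bracket of (22), with its grouping assumed as reconstructed above, and recovers the 2/3, 1/5, 3/28 coefficient pattern appearing in (23).

```python
import sympy as sp

u, h = sp.symbols('u h', positive=True)

# Substitute Y = u^2 (so sqrt(Y) = u): the expansion in u is an ordinary Taylor
# series, i.e. a fractional (Puiseux) series in the original variable Y.
bracket = -u * sp.sqrt(h - u**2) + h * sp.atan(u / sp.sqrt(h - u**2))

series_u = sp.series(bracket, u, 0, 8)
print(series_u)
# coefficients 2/3, 1/5, 3/28 of u^3, u^5, u^7 appear, i.e. of Y^(3/2), Y^(5/2), Y^(7/2);
# truncation at the first term gives the leading Y^(3/2) behaviour used in (24) and (27).
```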

References 1. Levi-Civita, T.: The Absolute Differential Calculus. Dover Publications Inc. (1926) 2. Bondi, H.: Spherically symmetrical models in general relativity. Monthly Notices 107 (1947) p. 410 3. Laserra, E.: Sul problema di Cauchy relativistico in un universo a simmetria spaziale sferica nello schema materia disgregata. Rendiconti di Matematica 2 (1982) p. 283 4. Laserra, E.: Dust universes with spatial spherical symmetry and euclidean initial hypersurfaces. Meccanica 20 (1985) 267–271 5. Amerio, L.: Analisi Matematica con elementi di Analisi Funzionale. Vol. 3 Parte I Utet


6. Siegel, C. L.: Topics in Complex Function Theory. Elliptic Functions and Uniformization Theory. New York: Wiley 1 (1988) p. 98 7. Davenport, J. H., Siret Y., Tournier. E.: Computer Algebra: Systems and Algorithms for Algebraic Computation. 2nd ed. San Diego: Academic Press (1993) 90–92 8. Iovane, G., Laserra, E., Giordano, P.: Fractal Cantorian structures with spatial pseudo-spherical symmetry for a possible description of the actual segregated universe as a consequence of its primordial fluctuations. Chaos, Solitons & Fractals 22 (2004) 521–528 9. Bochicchio, I., Laserra, E.: Spherical Dust Model in General Relativity. Proceeding of the International Conference ”Inverse Problem and Applications ”, May 22-23 2006, Birsk, Russia, ISBN 5-86607-266-1 (2006) 144–148 10. Bochicchio, I., Laserra, E.: Influence of the Initial Spatial Curvature on the Evolution of a Spatially Spherical Universe (to appear in Mathematical Methods, Physical Models and Simulation) 11. Giordano, P., Iovane, G., Laserra, E.: El Naschie (∞) Cantorian Structures with spatial pseudo-spherical symmetry: a possibile description of the actual segregated Universe. Chaos, Solitons & Fractals 31 (2007) 1108–1117

On the Differentiable Structure of Meyer Wavelets
Carlo Cattani¹ and Luis M. Sánchez Ruiz²
¹ DiFarma, Università di Salerno, Via Ponte Don Melillo, 84084 Fisciano (SA), Italy
[email protected]
² ETSID-Departamento de Matemática Aplicada, Universidad Politécnica de Valencia, 46022 Valencia, Spain
[email protected]

Abstract. In this paper the differential (first order) properties of Meyer wavelets are investigated. Keywords: Meyer Wavelet, Connection coefficients, Refinable integrals. AMS-Classification — 35A35.

1   Introduction

Wavelets have been widely studied from a theoretical point of view for their many interesting properties, mainly related to multiresolution analysis, such as generating orthonormal bases in L²(R), as well as for the fact that they have proven to be extremely useful in many applications such as image processing, signal detection, geophysics, medicine or turbulent flows. More mathematically focused topics, such as differential equations and even nonlinear problems, have also been studied with wavelets. Very often wavelets are compared with the Fourier basis (harmonic functions); however, the basic difference is that the harmonic functions are bounded in the frequency domain (localized in frequency) while wavelets are bounded both in space and in frequency. Nevertheless, a major drawback of wavelet theory is the existence of many different families of wavelets, which gives some arbitrariness to the whole theory. Among the many families of wavelets the simplest choice is the one based on Haar functions. Despite their simplicity, Haar wavelets have proven their suitability for dealing with problems in which piecewise constant functions or functions with sharp discontinuities appear. The scaling function is the box function defined by the characteristic function χ[0,1], and its Fourier transform, up to constants or a phase factor, is a function of the

Work partially supported by Regione Campania under contract “Modelli nonlineari di materiali compositi per applicazioni di nanotecnologia chimica-biomedica”, LR 28/5/02 n. 5, Finanziamenti 2003, by Miur under contract “Modelli non Lineari per Applicazioni Tecnologiche e Biomediche di Materiali non Convenzionali”, Univ. di Salerno, Processo di internazionalizzazione del sistema universitario, D.M. 5 agosto 2004 n. 262 - ART. 23 and by “Applications from Analysis and Topology” - APLANTOP, Generalitat Valenciana 2005.



sin(πt)/(πt) type, also called the sinc function. By exchanging the roles of the variable and frequency spaces, i.e. assuming as Fourier transform the box function and taking the sinc function in the space of the variable, we can construct the so-called Shannon wavelets [2]. In a more general approach they can be derived from the real part of harmonic (complex) wavelets [2], and the scaling function may be obtained by choosing a function in the Fourier space that satisfies the various conditions for a scaling function and then finding the wavelet basis. In fact the Haar and Shannon systems reverse the roles of the scaling and wavelet functions. Very recently the connection coefficients of both Shannon wavelets and harmonic wavelets [2] have been explicitly computed at any order. Let us recall that a drawback of the Shannon wavelet in the time domain is its slow decay, which has been improved by smoothing the scaling function in the frequency space, e.g. by means of the Meyer scaling function, which instead of having a sharp discontinuity uses a smooth function in order to interpolate between its 1 and 0 values [4]. In this paper we study the differentiable structure of Meyer wavelets and, in particular, their connection coefficients, in the line of the aforementioned results obtained for the harmonic [2] and Shannon wavelets.

2   Meyer's Wavelets

Meyer wavelets are defined in such a way as to avoid the slow decay (of compact-support frequency-domain wavelets) in the space domain. In order to do this one needs a continuous interpolating even function ν(ω) defined on R, with values in [0, 1], which is proportional to ω^{n+1} (see the interpolating polynomials in Table 1). It follows that the Meyer scaling function is given by [4]
φ̂(ω) = 1 for |ω| < 2π/3;  cos[ (π/2) ν( (3/(2π))|ω| − 1 ) ] for 2π/3 ≤ |ω| ≤ 4π/3;  0 for |ω| > 4π/3,   (1)

where ν(x) (see Table 1) is an interpolating polynomial (see [4]).

Table 1. Interpolating Polynomials

n   ν(ω)
0   ω
1   ω²(3 − 2ω)
2   ω³(10 − 15ω + 6ω²)
3   ω⁴(35 − 84ω + 70ω² − 20ω³)
4   ω⁵(126 − 420ω + 540ω² − 315ω³ + 70ω⁴)
5   ω⁶(462 − 1980ω + 3465ω² − 3080ω³ + 1386ω⁴ − 252ω⁵)
6   ω⁷(1716 − 9009ω + 20020ω² − 24024ω³ + 16380ω⁴ − 6006ω⁵ + 924ω⁶)
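As a numerical illustration of (1) and Table 1 (a sketch, not code from the paper), the scaling function can be evaluated on a frequency grid; the identity |φ̂(ω)|² + |φ̂(ω − 2π)|² = 1 on the overlap band is a quick sanity check of the construction.

```python
import numpy as np

def nu(x, n=3):
    """Interpolating polynomial from Table 1 (rows n = 0 and n = 3 shown)."""
    x = np.clip(x, 0.0, 1.0)
    if n == 0:
        return x
    if n == 3:
        return x**4 * (35 - 84*x + 70*x**2 - 20*x**3)
    raise NotImplementedError("add the other rows of Table 1 as needed")

def phi_hat(w, n=3):
    """Meyer scaling function in the frequency domain, as in (1)."""
    w = np.abs(np.asarray(w, dtype=float))
    out = np.zeros_like(w)
    out[w < 2*np.pi/3] = 1.0
    band = (w >= 2*np.pi/3) & (w <= 4*np.pi/3)
    out[band] = np.cos(0.5*np.pi * nu(3*w[band]/(2*np.pi) - 1.0, n))
    return out

w = np.linspace(2*np.pi/3, 4*np.pi/3, 7)        # the overlap band of (1)
print(phi_hat(w)**2 + phi_hat(w - 2*np.pi)**2)  # should be ~1 everywhere on the band
```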


If we define the characteristic function
χ(ω) = 1 for 2π/3 < |ω| < 4π/3, and 0 elsewhere,   (2)
the scaling function can be written as
φ̂(ω) = χ(ω + 2π/3) + (1/2) [ e^{iπν(3|ω|/(2π)−1)/2} + e^{−iπν(3|ω|/(2π)−1)/2} ] χ(ω).   (3)
By taking into account that, by the properties of the Fourier transform,
f̂(aω ± b) = (1/a) e^{∓ibω/a} f̂(ω/a),   (4)
it follows that the dilated and translated instances of the scaling function are
φ̂ⁿₖ(ω) := the Fourier transform of 2^{n/2} φ(2ⁿt − k), i.e. φ̂ⁿₖ(ω) = 2^{n/2} 2^{−n} e^{−iω2^{−n}k} φ̂(ω2^{−n}),
that is
φ̂ⁿₖ(ω) = 2^{−n/2} e^{−iω2^{−n}k} φ̂(ω2^{−n}),   (5)
so that the Meyer scaling function at any scale and frequency localization is [4]
φ̂ⁿₖ(ω) = 2^{−n/2} e^{−iω2^{−n}k} × { 1 for |ω| < 2^{n+1}π/3;  cos[ (π/2) ν( (3/(2π))|2^{−n}ω| − 1 ) ] for 2^{n+1}π/3 ≤ |ω| ≤ 2^{n+2}π/3;  0 for |ω| > 2^{n+2}π/3 },   (6)
where ν(x) is the interpolating polynomial (Table 1). According to (5)–(3) it is
φ̂ⁿₖ(ω) = 2^{−n/2} e^{−iω2^{−n}k} { χ(ω2^{−n} + 2π/3) + (1/2) [ e^{iπν(3|ω2^{−n}|/(2π)−1)/2} + e^{−iπν(3|ω2^{−n}|/(2π)−1)/2} ] χ(ω2^{−n}) }.   (7)
The scaling function in the time domain is obtained by taking the inverse Fourier transform
φ(t) := (1/2π) ∫_{−∞}^{∞} φ̂(ω) e^{iωt} dω = (1/π) ∫₀^{4π/3} φ̂(ω) cos(ωt) dω.   (8)

3   Some Properties of the Characteristic Function
In general, for the characteristic function (2) we can set
χ_{(a,b]}(ω) = 1 for a < |ω| ≤ b, and 0 elsewhere,   (9)


for which, assuming a ≤ b , c ≤ d , h > 0 , k > 0, the following properties ⎧ χ(a,b] (hω ± k) = χ( a ∓k, b ∓k] (ω) ⎪ ⎪ h h ⎪ ⎪ ⎪ ⎪ ⎪ χ(a,b] (−hω) = χ(a,b] (hω) ⎪ ⎪ ⎨ = χ(a,b]∪(c,d](ω) + χ(a,b]∩(c,d] (ω) χ(a,b] (ω) + χ(c,d](ω) ⎪ ⎪ ⎪ ⎪ ⎪ = χ(a,b]∩(c,d](ω) χ(a,b] (ω)χ(c,d] (ω) ⎪ ⎪ ⎪ ⎪ ⎩ χ(a+s,b+s] (ω)χ(c+s,d+s] (ω) = χ(a,b] (ω)χ(c,d] (ω) = χ(a,b]∩(c,d](ω)


(10)

hold. According to the previous equations, the characteristic function (9) on any interval (a, b] can be reduced to the basic function (2):   2π ω − (b − 2a) , b>a. χ(a,b] (ω) = χ 3(b − a) Analogously, according to (10)1 and (2), we can generalize the basic function (2) to any interval being 4π (ω) . χ(hω ± k) = χ( 2π 3h ∓k, 3h ∓k]

(11)

The product of the characteristic functions, enjoys some nice properties: Lemma 1. For the product of the characteristic functions we have: 2 2 4π m 2 2π n 4π n (ω) χ(2−m ω + π) χ(2−n ω) = χ( 2π m 3 2 − 3 π, 3 2 − 3 π]∩( 3 2 , 3 2 ] 3 with

(12)



 2π m 2 4π m 2 2π n 4π n 2 − π, 2 − π ∩ 2 , 2 = 3 3 3 3 3 3 ⎧ 1 ⎪ m n−1 ⎪ ∨ 2m > 1 + 2n+1 ⎪ ∅ ⇐⇒ 2 < + 2 ⎪ 2 ⎪ ⎪

⎪ ⎪ ⎪ 4π m 2 2π n 1 1 n−1 ⎪ ⎪ + < 2m < 2n + ⎨ 3 2 − 3 π, 3 2 ⇐⇒ 2 2 2

=  2π 4π 1 ⎪ n n n m n ⎪ ⎪ 2 , 2 ⇐⇒ 2 + < 2 < 2 + 1 ⎪ ⎪ 3 3 2 ⎪ ⎪

 ⎪ ⎪ 4π 2 2π ⎪ ⎪ 2m − π, 2n ⇐⇒ 2n + 1 < 2m < 2n+1 + 1 ⎩ 3 3 3

(13)

being χ∅ (ω) = 0 Analogously it can be easily shown that Lemma 2. For the product of the characteristic functions we have: 2 χ(2m ω + π) χ(2n ω) = χ( 2π −m − 2 π, 4π 2−m − 2 π]∩( 2π 2−n , 4π 2−n ] (ω) 3 2 3 3 3 3 3 3

(14)



From the above there follows, in particular,
χ(2ⁿω + 2π/3) χ(2ⁿ⁻¹ω) = 0   and   χ(2ⁿ⁻¹ω + 2π/3) χ(2ⁿω) = 0.

(15)

Another group of values of characteristic functions useful for the following computations are given by the following lemmas Lemma 3. For the following products of the characteristic functions we have: 2π n 4π n (ω) χ(2−m ω) χ(2−n ω) = χ( 2π m 4π m 3 2 , 3 2 ]∩( 3 2 , 3 2 ]

χ(2m ω) χ(2n ω) = χ( 2π −m , 4π 2−m ]∩( 2π 2−n , 4π 2−n ] (ω) 3 2 3 3 3

(16)

Taking into account the previous lemmas and (10)₅ we also have
Corollary 1. According to Lemma 3 it is
χ(2^m ω + 2π/3) χ(2^n ω + 2π/3) = χ(2^m ω) χ(2^n ω),
and, in particular,
χ(2ⁿω + 2π/3) χ(2ⁿ⁻¹ω + 2π/3) = 0   and   χ(2ⁿω) χ(2ⁿ⁻¹ω) = 0.   (17)
4   First Order Meyer Wavelet

If we take as interpolating function (Table 1) the linear one, ν(ω) = ω, we get from (6)
φ̂ⁿₖ(ω) = 2^{−n/2} e^{−iω2^{−n}k} × { 1 for |ω| < 2^{n+1}π/3;  sin( (3/4)|2^{−n}ω| ) for 2^{n+1}π/3 ≤ |ω| ≤ 2^{n+2}π/3;  0 for |ω| > 2^{n+2}π/3 },   (18)
that is
φ̂ⁿₖ(ω) = 2^{−n/2} e^{−iω2^{−n}k} [ χ(2^{−n}ω + 2π/3) + sin( 3·2^{−n−2}|ω| ) χ(2^{−n}ω) ].   (19)

From equation (19) it immediately follows that for k = 0 the scaling functions φ̂ⁿ₀(ω) are real functions, while for k ≠ 0 the functions φ̂ⁿₖ(ω) also have a nontrivial complex part (see Fig. 1). Moreover, the functions φⁿ₀(ω) are orthonormal functions with respect to the inner product on L²(R):

On the Differentiable Structure of Meyer Wavelets 00 Ω 1

01 Ω 1

Π Π  2 Π Π  2

1009

Π  Π 2

Ω

Ω

Π  Π 2

1 1 1 Ω

02 Ω 1

1

Π Π  2

Ω

Π  Π 2

Π Π  2

Π  Π 2

Ω

1 1 11 Ω

Π Π  2

12 Ω

Ω

Π  Π 2

Π Π  2

Π  Π 2

Ω

Fig. 1. The Meyer scaling function in frequency domain (plain the real part)

⟨f, g⟩ := ∫_{−∞}^{∞} f(t) g̅(t) dt,   (20)
with f(t), g(t) in L²(R), where the bar stands for the complex conjugate. By taking into account the Parseval equality it is
⟨f, g⟩ = (1/2π) ∫_{−∞}^{∞} f̂(ω) ĝ̅(ω) dω = (1/2π) ⟨f̂, ĝ⟩.   (21)
In the Fourier domain, it is possible to show that


Theorem 1. The scaling functions {φ0k (t)} are orthonormal (see e.g. [4]) φ0k (t) , φ0h (t) = δkh . Proof: It is enought to show that, due to (21), 1 0 φ (ω) , φ0h (ω) = δkh 2π k From equation (19) and taking into account the definition (20) of the inner product it is  

  ∞ 3 2 −iωk 0 0   |ω| χ(ω) × e φk (ω) , φh (ω) = χ(ω + π) + sin 3 4 −∞  

 3 2 ×eiωh χ(ω + π) + sin |ω| χ(ω) dω 3 4 Since the compact support of the characteristic functions are disjoint: χ(ω + 2 3 π) χ(ω) = 0, we get 

   ∞ 3 2 2 −iω(k−h) 0 0   |ω| χ(ω) dω e χ(ω + π) + sin φk (ω) , φh (ω) = 3 4 −∞ i.e. taking into account the definition of the characteristic functions  2π/3 φ0k (ω) , φ0h (ω) = e−iω(k−h) dω + −2π/3      −2π/3  4π/3 3 3 −iω(k−h) + |ω| dω + |ω| dω e sin2 e−iω(k−h) sin2 4 4 −4π/3 2π/3 Thus when k = h, φ0k (ω) , φ0k (ω) = =





2π/3

dω + −2π/3



−2π/3

2

sin −4π/3

    4π/3 3 3 2 ω dω + ω dω sin 4 4 2π/3

π π 4 π + + = 2π 3 3 3

when k = h, let say k = h + n, it is  1 φ0k (ω) , φ0h (ω) = sin n



2 nπ 3

 9 + 18 cos

There follows that φ0k (ω) , φ0h (ω) =

2 nπ 3

9 − 4n2 

2π , k = h 0 , k = h

 =0 .

On the Differentiable Structure of Meyer Wavelets

1011

The Meyer wavelet in the Fourier domain is [4]    − 2π) + φ(ω  + 2π) φ(ω/2)   ψ(ω) = −e−iω/2 φ(ω and, according to (4),    φ(ω/2)  ψ(ω) = −e−iω/2 e2πiω + e−2πiω φ(ω) i.e.

 φ(ω/2)   ψ(ω) = −2e−iω/2 cos(2πω)φ(ω)

(22)

From (22) we can easily derive the dilated and translated instances −n (n−1)/2 −n+1 (ω) φ0 ψkn (ω) = −2−n/2+1 e−i2 ω(k+1/2) cos(2−n+1 πω)2n/2 φ−n 0 (ω)2

i.e. −n −n+1 (ω) . ψkn (ω) = −2(n+1)/2 e−i2 ω(k+1/2) cos(2−n+1 πω)φ−n 0 (ω)φ0

From this definition we can easily prove that Meyer wavelets are orthonormal functions.

References 1. D.E. Newland, “Harmonic wavelet analysis”, Proc.R.Soc.Lond. A, 443, (1993) 203–222. 2. C.Cattani, “Harmonic Wavelets towards Solution of Nonlinear PDE”, Computers and Mathematics with Applications, 50 (2005), 1191-1210. 3. H. Mouri and H.Kubotani, “Real-valued harmonic wavelets”, Phys.Lett. A, 201, (1995) 53–60. 4. I. Daubechies, Ten Lectures on wavelets. SIAM, Philadelphia, PA, (1992).

Towards Describing Multi-fractality of Traffic Using Local Hurst Function Ming Li1, S.C. Lim2, Bai-Jiong Hu1, and Huamin Feng3 1

School of Information Science & Technology, East China Normal University, Shanghai 200062, P.R. China [email protected], [email protected] 2 Faculty of Engineering, Multimedia University, 63100 Cyberjaya, Selanger, Malaysia [email protected] 3 Key Laboratory of Security and Secrecy of Information, Beijing Electronic Science and Technology Institute, Beijing 100070, P.R. China [email protected]

Abstract. Long-range dependence and self-similarity are two basic properties of network traffic time series. Fractional Brownian motion (fBm) and its increment process fractional Gaussian noise (fGn) are commonly used to model network traffic with the Hurst index H that determines both the regularity of the sample paths and the long memory property of traffic. However, it appears too restrictive for traffic modeling since it can only model sample paths with the same smoothness for all time parameterized by a constant H. A natural extension of fBm is multifractional Brownian motion (mBm), which is indexed by a time-dependent Hurst index H(t). The main objective of this paper is to model multi-fractality of traffic using H(t), i.e., mBm, on a point-by-point basis instead of an interval-by-interval basis as traditionally done in computer networks. The numerical results for H(t) of real traffic, which are demonstrated in this paper, show that H(t) of traffic is time-dependent, which not only provide an alternative evidence of the multifractal phenomena of traffic but also reveal an challenging issue in traffic modeling: multi-fractality modeling of traffic. Keywords: Network traffic modeling, fractals, multi-fractals, multifractional Brownian motion, local Hurst function.

1 Introduction Experimental observations of long-range dependence (LRD) and self-similarity (SS) of traffic time series in computer communication networks (traffic for short) were actually noted before the eighties of last century [1]. The description of traffic in [1] was further studied and called “packet trains” during the eighties of last century [2]. However, the properties of LRD and SS of traffic were not investigated from the viewpoint of self-affine random functions, such as fractional Brownian motion (fBm) or fractional Gaussian noise (fGn), until the last decade, see e.g. [3], [4], [5], [6], [7], and our previous work [8], [9], [10], [11], [12].


Further research shows that traffic has multifractal behavior at small time-scales. However, the multifractal behavior of traffic is conventionally described on an interval-by-interval basis by using H(n), where H(n) is the Hurst parameter in the nth interval for n = 1, 2, ⋅⋅⋅, see e.g. [13], [14], [15], [16], [17], [18], and our recent work [19]. Note that H plays a role in computer networks, see e.g. [20], [21], and our recent papers [19], [22], [23]. Hence, modeling the multi-fractality of traffic has become a contemporary topic in traffic modeling. From a practical view, models of the multi-fractal phenomena of traffic are desired, as they may be promising for understanding, or finding solutions to, some difficult issues in networking, such as simulation of the Internet, performance evaluation, and network security, as can be seen from [15], [19], [34], [35]. Owing to the fact that a monofractal model utilizes fractional Brownian motion (fBm) with a constant Hurst index H that characterizes the global self-similarity, see e.g. [24], we need to study the possible variation of scaling behavior locally. To do so, fBm can be generalized to multifractional Brownian motion (mBm) by replacing the constant H with a time-dependent Hurst function H(t), which is also called the local Hölder exponent, see e.g. [26], [27], and our work [28], [29], [30]. In this paper, we discuss and describe the multi-scale and multi-fractal properties of real traffic based on H(t). We shall exhibit that H(t) of traffic changes erratically with location t. It is noted

that if H(t) is allowed to be a random function or a random process, then the mBm is a multifractal process. We note that H(t) differs substantially from the interval-by-interval H(n), since it can reflect the multifractal behavior on a point-by-point basis. To the best of our knowledge, modeling the multi-scale and multi-fractal phenomena of real traffic using H(t) is rarely seen. The rest of the paper is organized as follows. We address modelling the multi-fractality of traffic based on the local Hurst function in Section 2. Discussions are given in Section 3, which is followed by conclusions.

2 Multi-fractality of Real Traffic
A direct generalization of fBm to multifractional Brownian motion (mBm) can be carried out by replacing the Hurst index with a function H(t) satisfying H: [0, ∞] → (0, 1). This was first carried out independently by Peltier and Levy-Vehel [27] and by Benassi, Jaffard and Roux [31], based on the moving-average and harmonizable definitions respectively. Following [24] and [26], we define the mBm X(t) by Eq. (1), where t > 0, H: [0, ∞] → (a, b) ⊂ (0, 1) is a Hölder function of exponent β > 0, and B(t) is the standard Brownian motion. The variance of B_H(t) is given by Eq. (2), where
σ²_{H(t)} = Γ(2 − H(t)) cos(πH(t)) / [ πH(t)(2H(t) − 1) ].
Since σ is time-dependent,


it will be desirable to normalize B_H(t) such that E[(X(t))²] = t^{2H(t)}, by replacing X(t) with X(t)/σ_{H(t)}.
X(t) = (1/Γ(H(t) + 1/2)) { ∫_{−∞}^{0} [ (t − s)^{H(t)−1/2} − (−s)^{H(t)−1/2} ] dB(s) + ∫₀^{t} (t − s)^{H(t)−1/2} dB(s) }.   (1)
E[(X(t))²] = σ²_{H(t)} t^{2H(t)}.   (2)

For the subsequent discussion, X(t) will be used to denote the normalized process. The explicit expression of the covariance of X(t) can be calculated as
E[X(t₁)X(t₂)] = N(H(t₁), H(t₂)) [ t₁^{H(t₁)+H(t₂)} + t₂^{H(t₁)+H(t₂)} − |t₁ − t₂|^{H(t₁)+H(t₂)} ],   (3)
where
N(H(t₁), H(t₂)) = Γ(2 − H(t₁) − H(t₂)) cos( π(H(t₁)+H(t₂))/2 ) / [ π ((H(t₁)+H(t₂))/2) (H(t₁)+H(t₂) − 1) ].
With the assumption that H(t) is a β-Hölder function such that 0 < inf(H(t)) ≤ sup(H(t)) < min(1, β), one may approximate H(t + ρu) ≈ H(t) as ρ → 0. Therefore, the local covariance function of the normalized mBm has the following limiting form

E[X(t + τ)X(t)] ~ (1/2) ( |t + τ|^{2H(t)} + |t|^{2H(t)} − |τ|^{2H(t)} ),   τ → 0.   (4)

The variance of the increment process becomes

E{ [ X(t + τ) − X(t) ]² } ~ |τ|^{2H(t)},   τ → 0,   (5)

implying that the increment process of mBm is locally stationary. It follows that the local Hausdorff dimension of the graph of mBm is given by
dim{X(t), t ∈ [a, b]} = 2 − min{H(t), t ∈ [a, b]}   (6)

for each interval [a, b] ⊂ R + . Due to the fact that the Hurst index H is time-dependent, mBm fails to satisfy the global self-similarity property and the increment process of mBm does not satisfy the


stationary property. Instead, standard mBm now satisfies the local self-similarity. Recall that fBm BH (t ) is a self-similar Gaussian process with BH (at ) and a H BH (t ) having identical finite-dimensional distributions for all a > 0. For a locally self-similar process, therefore, one may hope that the following expression can provide a description for the local self-similarity of X (t ) :

X (at ) ≅ a H ( t ) X (t ), ∀a > 0,

(7)

where ≅ stands for equality in distribution. However, this definition of locally self-similar property would lead to a situation where the law of X ( s ) depends on

H(t) when s is far away from t: X(s) ≅ (s/t)^{H(t)} X(t). A more satisfactory way of characterizing this property is the locally asymptotical self-similarity introduced by Benassi, Jaffard and Roux [31]. A process X(t) indexed by a Hölder exponent H(t) ∈ C^β, with H(t): [0, ∞] → (0, 1) for t ∈ R and β > sup(H(t)), is said to be locally asymptotically self-similar (lass) at a point t₀ if
lim_{ρ→0⁺} ( [X(t₀ + ρu) − X(t₀)] / ρ^{H(t₀)} )_{u∈R} ≅ ( B_{H(t₀)}(u) )_{u∈R},   (8)

where the equality in law is up to a multiplicative deterministic function of time and BH (t0 ) is the fBm indexed by H (t0 ). It can be shown that mBm satisfies such a locally self-similar property. In passing, the property described by (8) is also analyzed in our recent work [32] from a view of the Cauchy class. Based on the local growth of the increment process, one may write a sequence

S_k(j) = (m/(N − 1)) Σ_{i=j}^{j+k} |X(i + 1) − X(i)|,   1 < k < N,   (9)
where m is the largest integer not exceeding N/k. The local Hurst function H(t) at the point t = j/(N − 1) is then given by
H(t) = − log( √(π/2) S_k(j) ) / log(N − 1).   (10)
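A direct numerical transcription of (9)–(10) is given below as a sketch under the reconstruction above, with the averaging window taken as the k increments following j; it is not the authors' implementation.

```python
import numpy as np

def local_hurst(X, k=64):
    """Estimate the local Hurst function H(t) pointwise from a series X via (9)-(10)."""
    X = np.asarray(X, dtype=float)
    N = len(X)
    m = N // k                      # largest integer not exceeding N / k
    inc = np.abs(np.diff(X))        # |X(i+1) - X(i)|
    H = np.empty(N - k - 1)
    for j in range(N - k - 1):
        S_kj = (m / (N - 1.0)) * inc[j:j + k].sum()                    # (9)
        H[j] = -np.log(np.sqrt(np.pi / 2.0) * S_kj) / np.log(N - 1.0)  # (10)
    return H

# demo on synthetic data (rescaled Brownian increments, nominal H = 0.5);
# for real use, X would be e.g. the packet-size series of DEC-PKT-1.tcp
rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal(8192)) * (8192 ** -0.5)
H = local_hurst(X, k=64)
print(H.min(), H.mean(), H.max())
```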

The function H (t ) in (10) can serve as a numerical model of multi-fractality of traffic. Now, we select 4 widely-used real-traffic traces in computer networks. They are DEC-PKT-n.tcp (n = 1, 2, 3, 4) [33]. Fig. n (a) shows their time series, where X (i ) implies the number of bytes in the ith packet (i = 0, 1, 2, …). Fig. n (b) illustrates the corresponding local Hurst function. Recall that the mBm is a locally self-similar process. For H (t ) which is a continuous deterministic function, the resulting mBm is a multi-scale process. On the other hand, if H (t ) is a random function or a random process, then the mBm is a



Fig. 1. (a) Traffic time series X(i) of DEC-PKT-1.tcp. (b) Local Hurst function of X(i)


Fig. 2. (a) Traffic time series X(i) of DEC-PKT-2.tcp. (b) Local Hurst function of X(i)


Fig. 3. (a) Traffic time series X(i) of DEC-PKT-3.tcp. (b) Local Hurst function of X(i)


Fig. 4. (a) Traffic time series X(i) of DEC-PKT-4.tcp. (b) Local Hurst function of X(i)


multifractal process. From the above figures, it is obvious that H(t) appears random. Thus, the H(t) curves illustrated in Fig. 1(b) – Fig. 4(b) are numerical models of the multi-fractality of the real traffic DEC-PKT-n.tcp (n = 1, 2, 3, 4).

3 Discussions The previous figures provide verification that traffic is multi-fractal in nature. Recall that, from a networking point of view, the Hurst parameter characterizes the burstiness of a process. From the above figures, we see that the local Hurst function of a real-traffic series is time varying. Thus, a traffic series may have different burstiness at different points in time. As is known, if the space of a buffer is fully occupied, some packets will be dropped and retransmitted later, when the traffic load becomes light, but these packets suffer delay. The above H(t)s provide evidence of why, during a communication process, packets are delayed due to traffic congestion and why the delay is usually a random variable. Further, the local Hurst functions in Figs. 1(b) – 4(b) verify that traffic has strictly alternating ON/OFF-period behavior, where the term “strictly alternating ON- and OFF-periods” implies that 1) the lengths of ON-periods are independent and identically distributed (i.i.d.) and so are the lengths of OFF-periods, and 2) an OFF-period always follows an ON-period [16]. The assumption that the Hurst function is continuous implies H(t + τ) ≈ H(t) for τ → 0. Therefore, the normalized mBm has the covariance in the following limiting form:

E[X(t + τ)X(t)] = 0.5 ( |t|^{2H(t)} + |t + τ|^{2H(t)} − |τ|^{2H(t)} ),   τ → 0.   (11)
The variance of the increment process is given by
E[ |X(t + τ) − X(t)|² ] = |τ|^{2H(t)},   τ → 0.   (12)

Therefore, one sees that the increment process of mBm is locally stationary. In practice, a process is causal. Hence, we consider X₊(t), which stands for a causal process with the starting point t = 0. In this case, one has the one-sided mBm based on the fractional integral of Riemann–Liouville type [30]:
X₊(t) = (1/Γ(H(t) + 1/2)) ∫₀^{t} (t − u)^{H(t)−0.5} dB(u).   (13)
For t₁ < t₂, we have the covariance
E[X₊(t₁)X₊(t₂)] = [ 2 t₁^{H(t₁)+0.5} t₂^{H(t₂)−0.5} / ( (2H(t₁)+1) Γ(H(t₁)+0.5) Γ(H(t₂)+0.5) ) ] · ₂F₁( 0.5 − H(t₂), 1, H(t₁)+1.5, t₁/t₂ ).   (14)


The variance of X₊(t) has a similar form to that of X(t), i.e., ∼ t^{2H(t)}, up to a deterministic (or random) function of H(t). Though the previously discussed H(t) appears time-dependent, as can be seen from Figs. 1–4, its analytic model remains unknown. Clearly, the multi-fractality of traffic may be quantitatively modelled if analytic models, either deterministic or statistical, of H(t) are obtained. Either is greatly desired in practical applications such as pattern recognition of traffic, as can be seen from [12], [19]. Finding analytic models of H(t) is our further aim, and it is certainly an attractive one.
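For illustration, the one-sided mBm of (13) can be simulated by discretizing the Riemann–Liouville integral. The following is a rough sketch under an Euler-type approximation (step size, sample Hurst function and helper name are assumptions for the demo), not a method proposed in the paper.

```python
import numpy as np
from math import gamma

def simulate_one_sided_mbm(H, n_steps=2048, T=1.0, seed=1):
    """Euler-type discretization of (13): X(t) = (1/Gamma(H(t)+1/2)) * int_0^t (t-u)^(H(t)-1/2) dB(u)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dB = rng.standard_normal(n_steps) * np.sqrt(dt)   # Brownian increments
    t = np.arange(1, n_steps + 1) * dt
    X = np.zeros(n_steps)
    for i in range(n_steps):
        ti, Hi = t[i], H(t[i])
        u = t[:i + 1] - dt            # left endpoints of the increments dB(u)
        X[i] = ((ti - u) ** (Hi - 0.5) * dB[:i + 1]).sum() / gamma(Hi + 0.5)
    return t, X

# a time-dependent Hurst function chosen only for the demo
t, X = simulate_one_sided_mbm(lambda s: 0.6 + 0.2 * np.sin(2 * np.pi * s))
print(X[:5])
```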

4 Conclusions We have demonstrated the multi-fractality of real traffic based on the local Hurst function. The present results show that the local Hurst function H(t) of the investigated real traffic is time-varying. The significance of the present results is not only to show the multifractal phenomena of traffic on a point-by-point basis, instead of the interval-by-interval basis conventionally used in computer networks, but, more importantly, to take the research on the multi-fractality of traffic a step further towards modeling it.

Acknowledgements This work was supported in part by the National Natural Science Foundation of China under the project grant numbers 60573125 and 60672114, by the Key Laboratory of Security and Secrecy of Information, Beijing Electronic Science and Technology Institute under the project number KYKF 200606 of the open founding. SC Lim would like to thank the Malaysia Ministry of Science, Technology and Innovation for the IRPA Grant 09-99-01-0095 EA093, and Academy of Sciences of Malaysia for the Scientific Advancement Fund Allocation (SAGA) P 96c.

References 1. Tobagi, F.A.; Gerla, M., Peebles, R.W., Manning, E.G.: Modeling and Measurement Techniques in Packet Communication Networks. Proc. the IEEE 66 (1978) 1423-1447 2. Jain, R., Routhier, S.: Packet Trains-Measurements and a New Model for Computer Network Traffic. IEEE Journal on Selected Areas in Communications 4 (1986) 986-995 3. Csabai, I.: 1/f Noise in Computer Network Traffic. J. Phys. A: Math. Gen. 27 (1994) L417-L421 4. Paxson V., Floyd, S.: Wide Area Traffic: The Failure of Poison Modeling. IEEE/ACM T. Networking 3 (1995) 226-244 5. Beran, J., Shernan, R., Taqqu, M. S., Willinger, W.: Long-Range Dependence in Variable Bit-Rate Video Traffic. IEEE T. Communications 43 (1995) 1566-1579 6. Crovella, E., Bestavros, A.: Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes. IEEE/ACM T. Networking 5 (1997) 835-846


7. Tsybakov, B., Georganas, N. D.: Self-Similar Processes in Communications Networks. IEEE T. Information Theory 44 (1998) 1713-1725 8. Li, M., Jia, W., Zhao, W.: Correlation Form of Timestamp Increment Sequences of Self-Similar Traffic on Ethernet. Electronics Letters 36 (2000) 1168-1169 9. Li, M., Jia, W., Zhao, W.: Simulation of Long-Range Dependent Traffic and a TCP Traffic Simulator. Journal of Interconnection Networks 2 (2001) 305-315 10. Li, M., Chi, C.-H.: A Correlation-Based Computational Method for Simulating Long-Range Dependent Data. J. Franklin Institute 340 (2003) 503-514 11. Li, M., Zhao, W., Jia, W., Long, D.-Y., Chi, C.-H.: Modeling Autocorrelation Functions of Self-Similar Teletraffic in Communication Networks Based on Optimal Approximation in Hilbert Space. Applied Mathematical Modelling 27 (2003) 155-168 12. Li, M.: An Approach to Reliably Identifying Signs of DDOS Flood Attacks Based on LRD Traffic Pattern Recognition. Computer & Security 23 (2004) 549-558 13. Cappe, O., Moulines, E., Pesquet, J.-C., Petropulu, A., Yang X.: Long-Range Dependence and Heavy Tail Modeling for Teletraffic Data. IEEE Sig. Proc. Magazine 19 (2002) 14-27 14. Feldmann, A., Gilbert, A. C., Willinger, W., Kurtz, T. G.: The Changing Nature of Network Traffic: Scaling Phenomena. Computer Communications Review 28 (1998) 5-29 15. Willinger, W., Paxson, V.: Where Mathematics Meets the Internet. Notices of the American Mathematical Society 45 (1998) 961-970 16. Willinger, W., Paxson, V., Riedi, R. H., Taqqu, M. S.: Long-Range Dependence and Data Network Traffic, Long-Range Dependence: Theory and Applications. P. Doukhan, G. Oppenheim, and M. S. Taqqu, eds., Birkhauser (2002) 17. Abry, P., Baraniuk, R., Flandrin, P., Riedi, R., Veitch, D.:, Multiscale Nature of Network Traffic. IEEE Signal Processing Magazine 19 (2002) 28-46 18. Nogueira, A., Salvador, P., Valadas, R.: Telecommunication Systems 24 (2003) 339–362 19. Li, M.: Change Trend of Averaged Hurst Parameter of Traffic under DDOS Flood Attacks. Computers & Security 25 (2006) 213-220 20. Tsybakov, B., Georganas, N. D.: On Self-Similar Traffic in ATM Queues: Definitions, Overflow Probability Bound, and Cell Delay Distribution. IEEE/ACM T. Networking 5 (1997) 397-409 21. Kim, S., Nam, S. Y., Sung, D. K.: Effective Bandwidth for a Single Server Queueing System with Fractional Brownian Input. Performance Evaluation 61 (2005) 203-223 22. Li, M., Lim, S. C.: Modeling Network Traffic Using Cauchy Correlation Model with Long-Range Dependence. Modern Physics Letters B 19 (2005) 829-840 23. Li, M.: Modeling Autocorrelation Functions of Long-Range Dependent Teletraffic Series Based on Optimal Approximation in Hilbert Space-a Further Study. Applied Mathematical Modelling 31 (2007) 625-631 24. Mandelbrot, B. B.: Gaussian Self-Affinity and Fractals. Springer (2001) 25. Levy-Vehel, J., Lutton, E., Tricot C. (Eds).: Fractals in Engineering. Springer (1997) 26. Peltier, R. F., Levy-Vehel, J.: A New Method for Estimating the Parameter of Fractional Brownian Motion. INRIA TR 2696 (1994) 27. Peltier, R. F., Levy-Vehel, J.: Multifractional Brownian Motion: Definition and Preliminaries Results. INRIA TR 2645 (1995) 28. Muniandy, S. V., Lim, S. C.: On Some Possible Generalizations of Fractional Brownian Motion. Physics Letters A226 (2000) 140-145 29. Muniandy, S. V., Lim, S. C., Murugan, R.: Inhomogeneous Scaling Behaviors in Malaysia Foreign Currency Exchange Rates. Physica A301 (2001) 407-428


30. Muniandy, S. V., Lim, S. C.: Modelling of Locally Self-Similar Processes Using Multifractional Brownian Motion of Riemann-Liouville Type. Phys. Rev. E 63 (2001) 046104 31. Benassi, A., Jaffard, S., Roux, D.: Elliptic Gaussian Random Processes. Revista Mathematica Iberoamericana 13 (1997) 19-90 32. Lim S. C., Li, M.: Generalized Cauchy Process and Its Application to Relaxation Phenomena. Journal of Physics A: Mathematical and General 39 (2006) 2935-2951 33. http://www.acm.org/sigcomm/ITA/ 34. Floyd, S., Paxson, V.: Difficulties in Simulating the Internet. IEEE/ACM T. Networking 9 (2001) 392-403 35. Willinger, W., Govindan, R., Jamin, S., Paxson V., Shenker, S.: Scaling Phenomena in the Internet: Critically Examining Criticality. Proc. Natl. Acad. Sci. USA 99 (Suppl. 1) (2002) 2573-2580

A Further Characterization on the Sampling Theorem for Wavelet Subspaces Xiuzhen Li1 and Deyun Yang2 Department of Radiology, Taishan Medical University, Taian 271000, China [email protected] Department of Information Science and Technology, Taishan University, Taian 271000, China [email protected]

1

2

Abstract. Sampling theory is one of the most powerful results in signal analysis. The objective of sampling is to reconstruct a signal from its samples. Walter extended the Shannon sampling theorem to wavelet subspaces. In this paper we give a further characterization on some shiftinvariant subspaces, especially the closed subspaces on which the sampling theorem holds. For some shift-invariant subspaces with sampling property, the sampling functions are explicitly given. Keywords: Sampling theorem, Wavelet subspace, Wavelet frame, Shift-invariant subspace.

1   Introduction and Main Results

Sampling theory is one of the most powerful results in signal analysis. The objective of sampling is to reconstruct a signal from its samples. For example, the classical Shannon theorem says that for each
f ∈ B_{1/2} = { f ∈ L²(R) : supp f̂ ⊂ [−1/2, 1/2] },
f(x) = Σ_{k=−∞}^{∞} f(k) sin π(x − k) / (π(x − k)),
where the convergence is both in L²(R) and uniform on R, and the Fourier transform is defined by f̂(ω) = ∫_{−∞}^{∞} f(x) e^{−2πixω} dx. If ψ(x) = sin(πx)/(πx), then {ψ(· − k)}_{k∈Z} is an orthonormal basis for B_{1/2}, and B_{1/2} is a shift-invariant subspace of L²(R). The structure of finitely generated shift-invariant subspaces in L²(R) is studied, e.g., in [1]–[6]. There are fruitful results in wavelet theory from the past 20 years (see [7]–[12]). In [2], Janssen considered shifted sampling and the corresponding aliasing error by means of the Zak transform. Walter [1] extended the Shannon sampling theorem to wavelet subspaces. Zhou and Sun [13] characterized the general shifted wavelet subspaces on which the sampling theorem holds:

This work was supported by the National Natural Science Foundation of China (Grant No. 60572113).



Proposition 1 ([13]). Let V₀ be a closed subspace of L²(R) and let {φ(· − n)}_{n∈Z} be a frame for V₀. Then the following two assertions are equivalent:
(i) Σ_k c_k φ(x − k) converges pointwise to a continuous function for any {c_k} ∈ l², and there exists a frame {S(· − n)}_{n∈Z} for V₀ such that
f(x) = Σ_k f(k) S(x − k) for any f ∈ V₀,
where the convergence is both in L²(R) and uniform on R.
(ii) φ ∈ C(R), sup_{x∈R} Σ_k |φ(x − k)|² < ∞, and there exist two positive constants A, B such that
A χ_{E_φ}(ω) ≤ | Σ_k φ(k) e^{−ikω} | ≤ B χ_{E_φ}(ω), a.e.,
where χ is the characteristic function and E_φ = {ω ∈ R : Σ_k |φ̂(ω + 2kπ)|² > 0}.
Moreover, it implies that for S in (i), f(n) = ⟨f, S̃(· − n)⟩ for any f ∈ V₀, where S̃ is defined as in Proposition 3 below.
In this paper we characterize some shift-invariant subspaces, especially the closed subspaces of L²(R) on which the sampling theorem holds.
Notations. Firstly, we discuss functions in L²(R); therefore f = g means that f(ω) = g(ω) for almost every ω ∈ R. C(R) is the space of continuous functions. G_f(ω) = Σ_k |f̂(ω + k)|²; it is easy to see that G_f is defined only a.e. E_f = {ω ∈ R : G_f(ω) > 0} for any f ∈ L²(R). χ_E is the characteristic function of the set E. f*(ω) = Σ_n f(n) e^{−i2πnω} for any f ∈ L²(R) with Σ_n |f(n)|² < ∞. V_f⁰ = {g : g(·) = Σ_n c_n f(· − n), where the convergence is in L²(R) and {c_n}_{n∈Z} ∈ l²}. V_f = span{f(· − n)}_n, which means that any g ∈ V_f can be approximated arbitrarily well in norm by finite linear combinations of the vectors f(· − n); moreover, V_f is called the shift-invariant subspace generated by f. Let μ(·) be the Lebesgue measure on R. For E, F ⊂ R, E =_{a.e.} F means that μ(E \ F) = μ(F \ E) = 0. For φ ∈ L²(R), if {φ(· − n)}_n is a frame (Riesz basis) for V_φ, then φ is called a frame (Riesz) function.
Definition 1. A closed subspace V of L²(R) is called a sampling space if there exists a frame {ψ(· − k)}_{k∈Z} for V such that Σ_k c_k ψ(x − k) converges pointwise to a continuous function for any {c_k} ∈ l², and
f(x) = Σ_k f(k) ψ(x − k) for any f ∈ V,
where the convergence is both in L²(R) and uniform on R. In this case, ψ is called a sampling function on V.
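Definition 1 covers, in particular, the classical Shannon case recalled at the beginning of this section: V = B_{1/2} with ψ = sinc is a sampling space. The following numerical sketch (an illustration only, not part of the paper; the test signal is an arbitrary finite combination of shifted sinc's) reconstructs a band-limited signal from its integer samples.

```python
import numpy as np

def sinc(x):
    return np.sinc(x)  # numpy's sinc is sin(pi x)/(pi x), i.e. the psi above

# f is a finite combination of shifted sinc's, hence f lies in B_{1/2}
def f(x):
    return 2.0 * sinc(x - 0.3) - 0.7 * sinc(x + 2.1)

k = np.arange(-400, 401)                 # integer sampling grid (truncated)
samples = f(k)

x = np.linspace(-5, 5, 1001)
reconstruction = samples @ sinc(x[None, :] - k[:, None])   # f(x) = sum_k f(k) psi(x - k)

print(np.max(np.abs(reconstruction - f(x))))   # small truncation error only
```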


From the definition, we know that if V is a sampling space, then for any f ∈ V there exists a function g ∈ C(R) such that f(x) = g(x) a.e. x ∈ R. Therefore, in what follows we assume that all the functions in a sampling space are continuous. Next we introduce the following three sets to characterize the functions with the sampling property.
P₁ is the subset of L²(R) ∩ C(R) in which each function φ satisfies that Σ_k c_k φ(x − k) converges pointwise to a continuous function for any {c_k} ∈ l².
P₂ is the set of functions in which each function f satisfies the following three conditions:
(i) f ≠ 0, f ∈ L²(R) ∩ C(R), Σ_k |f(k)|² < ∞, and f̂ is a bounded function;
(ii) μ({ω : f*(ω) = 0, G_f(ω) ≠ 0}) = 0;
(iii) there exist two positive constants A, B such that
A ≤ G_f(ω) / |f*(ω)|² ≤ B, a.e. ω ∈ E_f.
P₃ = {φ ∈ P₂ : sup_{x∈R} Σ_n |φ(x − n)|² < ∞}.

For any f ∈ P₂ it is easy to see that μ(E_f) > 0 and that f*(ω), G_f(ω) are well defined almost everywhere. Next we list the following propositions, to be used in the proofs of our results.
Proposition 2 ([3],[5]). Let φ ∈ L²(R). Then φ is a frame function if and only if there are constants A, B > 0 such that A χ_{E_φ} ≤ G_φ ≤ B χ_{E_φ}, a.e. In particular, if φ is a Riesz function, then E_φ = R.
Proposition 3 ([3],[13]). Assume that f ∈ L²(R) and f is a frame function. Let
f̃̂(ω) = f̂(ω)/G_f(ω) if ω ∈ E_f, and 0 if ω ∉ E_f;
then {f̃(· − n)}_n is a dual frame of {f(· − n)}_n in V_f.
Proposition 4 ([13]). Let φ ∈ L²(R). Then φ ∈ P₁ if and only if the following hold: (i) φ ∈ C(R); (ii) Σ_{k∈Z} |φ(x − k)|² ≤ M for some constant M > 0.
Proposition 5 ([5]). V_f = {g ∈ L²(R) : ĝ = τ f̂, where τ is a function with period 1 and τ f̂ ∈ L²(R)}.
Proposition 6 ([5]). Let g ∈ V_f. Then V_g = V_f if and only if supp f̂ =_{a.e.} supp ĝ.

Now we give the following theorem to check whether V_f is the maximum shift-invariant subspace of L²(R).


Theorem 7. Let f ∈ L²(R). There exists g ∈ L²(R) such that f ∈ V_g and g ∉ V_f if and only if there exists E ⊆ [0, 1] with μ(E) > 0 such that f̂(ω) = 0 for any ω ∈ ∪_{k∈Z}(k + E).
For a shift-invariant subspace V_f with the sampling property, how can we find its sampling functions? For this, we give the following two theorems. Firstly, for any f ∈ L²(R), if f* is well defined, then we define
f̂_p(ω) = f̂(ω)/f*(ω) if f*(ω) ≠ 0, and 0 if f*(ω) = 0.   (1)
Theorem 8. If f ∈ P₂, then {f_p(· − n)}_n is a frame for V_f.
Theorem 9. (i) Let f*(ω) ≠ 0 a.e. ω ∈ R. If S ∈ L²(R) is such that
f(x) = Σ_k f(k) S(x − k),
where the convergence is in L²(R), then Ŝ = f̂_p.
(ii) For any f ∈ P₃, let
Ŝ(ω) = f̂(ω)/f*(ω) if f*(ω) ≠ 0, and R(ω) if f*(ω) = 0,
where R ∈ L²(R). Then for any g ∈ V_f⁰, g(x) = Σ_n g(n) S(x − n).
Finally, we give the following equivalent characterization of shift-invariant subspaces with the sampling property.
Theorem 10. Assume that V is a shift-invariant subspace. Then the following assertions are equivalent:
(i) V is a sampling space.
(ii) For any φ ∈ C(R) such that {φ(· − n)}_n is a frame for V, we have
sup_x Σ_n |φ(x − n)|² < ∞,   (2)
and there exist positive constants A, B such that
A χ_{E_φ}(ω) ≤ |φ*(ω)| ≤ B χ_{E_φ}(ω), a.e.   (3)
(iii) There exists φ ∈ C(R) such that {φ(· − n)}_n is a frame for V and (2), (3) hold.
(iv) There exists φ ∈ C(R) such that {φ(· − n)}_n is a frame for V, (2) holds, and
A ‖g‖² ≤ Σ_n |g(n)|² ≤ B ‖g‖²,  ∀g ∈ V,   (4)
for some positive constants A, B.
(v) There exists φ ∈ C(R) such that {φ(· − n)}_n is a frame for V, (2) holds, and { Σ_l φ(k − l) φ̃(k − ·) }_k is a frame for V.


2


Proof of Main Results

Proof of Theorem 7. [necessary] Assume that g ∈ L2 (R), f ∈ Vg and g ∈ / Vf . By Proposition 5 and Proposition 6, f = τ g where τ is a a function with period 1  > 0. Then there exists E ∗ ∈ R with μ(E ∗ ) > 0 such that and μ(supp g \ suppf) g(ω) = 0, f(ω) = 0 for any ω ∈ E ∗ . Thus τ (ω) = 0 for any ω ∈ E ∗ . Let T (E ∗ ) = {ω : ω ∈ [0, 1], there exists k ∈ Z such that ω + k ∈ E ∗ }. Then τ (ω) = 0 for any ω ∈ T (E ∗ ). Since E ∗ ⊆ k∈Z (k + T (E ∗ )) and μ(E ∗ ) > 0, we have μ(T (E ∗ )) > 0. Therefore, if we take E = T (E ∗ ), then E satisfies the conditions in the theorem. [suf f iciency] Assume that f ∈ L2 (R) and there exists E ⊆ [0, 1] with μ(E) > 0 such that f(ω) = 0 for any ω ∈ k∈Z k + E. Let

g(ω) =

f(ω), if ω ∈ / E, 1, if ω ∈ E.

Then f(ω) = τ (ω) g (ω) for any ω ∈ R, where

1, if ω ∈ / k∈Z (k + E), τ (ω) = 0, if ω ∈ k∈Z (k + E)  > 0. Thus we have f ∈ Vg is a function with period 1 and μ(supp g \ suppf) and g ∈ / Vf . This completes the proof of Theorem 7. Proof of Theorem 8. By (1) and f ∈ P2 , we have fp (ω) = τ (ω)f(ω), where 1 , if ω ∈ Ef τ (ω) = f ∗ (ω) 0, if ω ∈ / Ef .

Then Gfp (ω) =

Gf (ω)

|f ∗ (ω)| 0,

2

, if ω ∈ Ef if ω ∈ / Ef .

Since f ∈ P2 , there exist positive constants A, B such that AχEf ≤ Gfp ≤ BχEf . However, τ is a function with period 1. By Proposition 5, fp ∈ Vf . It is from suppf = suppfp and Proposition 6 that Vf = Vfp . Finally, by Proposition 2, fp is a frame for Vf . This completes the proof of Theorem 8.  Proof of Theorem 9. [(i)] Since f (x) = k f (k)S(x − k) and f∗ (ω) = 0, we   have f(ω) = f∗ (ω)S(ω), S(ω) = ff∗(ω) . Thus S = fp . (ω)  0 [(ii)] For any g ∈ Vf , there exist {ck } ∈ l2 such that g(·) = k ck f (· − k),  where the convergence is pointwise and g ∈ C(R). Let C(ω) = k ck e−i2πkω ,



C ∗ (ω) = C(ω)χEf (ω). Then there exists {c∗k } ∈ l2 such that C ∗ (ω) =  ∗ −i2πkω . It follows from k ck e g(ω) = C(ω)f(ω) = C ∗ (ω)f(ω)   and Proposition 4 that g(x) = k c∗k f (x − k) converges pointwisely in R. By Theorem 8, fp is a frame for Vf . Then using Proposition 3, fp is a dual frame of {f (· − n)}n∈Z in Vf . Thus (g, fp (· − n)) =



C ∗ (ω)

Ef

=



 1/2 f∗ (ω)   2 i2πnω f (ω) e dω = C ∗ (ω)f∗ (ω)ei2πnω dω   Gf (ω) −1/2

c∗k f (n − k) = g(n).

k



Thus we get g(x) =

n

g(n)fp (x − n). Let  h(ω) = R(ω)χ{ω:f ∗ (ω)=0} . Then

 S(ω) = fp (ω) +  h(ω), S = fp + h. Since g(n) = Then



− k) for n ∈ Z, we have g∗ (ω) =  h(ω) = 0, g(n)h(x − n) = 0. g∗ (ω)

∗ k ck f (n



∗ −i2πkω

f ∗ (ω). k ck e

n

Therefore g(x) =



g(n)(fp (x − n) + h(x − n)) =

n



g(n)S(x − n).

n

This completes the proof of Theorem 9. For the proof of Theorem 10, we give the following Lemma. Lemma 1. Assume that φ ∈ P1 is a frame function. If f ∈ Vφ satifies f(ω) =  b(ω)φ(ω), where b(ω) is a function with period 1 and bounded on Eφ , then f ∈ P1 . Specially, for any frame function ψ ∈ Vφ , we have ψ ∈ P1 .  Proof. Assume that f ∈ Vφ satisfies f(ω) = b(ω)φ(ω), where b(ω) is a function Since B(ω) is with period 1 and bounded on Eφ . Let B(ω) = b(ω)χEφ (ω). bounded on [− 21 , 12 ], there exists {Bn } ∈ l2 such that B(ω) = n Bn e−i2πnω . Since φ ∈ P1 and   f(ω) = b(ω)φ(ω) = B(ω)φ(ω),  we have f ∈ C(R) and f (x) = n Bn φ(x − n), where the convergence is both in L2 (R) and pointwisely.  2 Now using φ ∈ P1 and Proposition 4, we have supx∈R n |φ(x − n)| < ∞. Hence  2      2 |f (x − k)| = sup Bn φ(x − k − n) sup    x x n k

k



 2     2 = sup |B(ω)|  φ(x − k)ei2πkω  dω   x −1/2 k  2 2 |φ(x − k)| < ∞. ≤ sup B(ω) ∞ 

x

1/2

(5)

k

Then by f ∈ C(R), (5) and Proposition 4, we get f ∈ P1 . For any frame function ψ ∈ Vφ , there exists a function τ with period 1 such   that ψ(ω) = τ (ω)φ(ω), then Gψ = |τ (ω)|2 Gφ (ω). By Proposition 2, τ is bounded on Eφ . Thus ψ ∈ P1 . This completes the proof of Lemma 1. Proof of Theorem of 10. (i) ⇒ (ii) Given any continuous function φ such that {φ(· − n)}n is a frame for sampling space V . Let ψ be a sampling function  for V . By Proposition 4, supx∈R n |ψ(x − n)|2 < ∞.    Let φ(ω) = b(ω)ψ(ω), where b(ω) = k bk e−i2πkω for some {ck } ∈ l2 . Then 2 Gφ (ω) = |b(ω)| Gψ (ω). By Proposition 2, b(ω) is bounded on Eψ . By Lemma 1, φ ∈ P1 . Then by Proposition 1, we get (ii). (ii) ⇒ (iii) It is trivial. (iii) ⇒  (i) By Proposition 1, there exists a frame {ψ(· − n)}n for V such that f (x) = n f (n)ψ(x− n) for any f ∈ V . By Lemma 1 and Proposition 4, ψ ∈ P1 . Thus we get (i). (iv) ⇒ (i) For any n ∈ Z, T g = g(n) is a bounded linear functional on V . Then there exists Sn ∈ V such that g(n) = g, Sn for any n ∈ Z, g ∈ V. Let S := S0 and g1 (x) = g(x + 1), for any g ∈ V, x ∈ R. Then g1 , S = g1 (0) = g(1) = g, S1 . Thus for any g ∈ V , we have    g(x)S(x − 1)dx = g(x + 1)S(x)dx = g1 (x)S(x)dx = g1 (0) = g(1)  = g, S1 = g(x)S1 (x)dx. Thus S1 (x) = S(x−1). Similarly, we have Sn (x) = S(x−n) for any n ∈ Z, x ∈ R. Therefore g(n) = g, S(· − n) for any g ∈ V. Now by (4), S is a frame for V . Let S be defined as Proposition 3. Then by Lemma 1 and Proposition 4, S is the sampling function for V . Therefore we get (i).   − n) for any f ∈ V0 , (i) ⇐⇒ (iv) Note that in Proposition 1, f (n) = f, S(·  − n)}n is a frame for V0 . Thus it is just from Proposition 1. where {S(·



(iv) ⇐⇒ (v) Assume that there exists a continuous function φ which is a  2 frame function for V such that supx n |φ(x − n)| < ∞. Then by Lemma 1, we   − l) convergences both in L2 (R) have φ ∈ P1 . Therefore qx (·) := l φ(x − l)φ(· and uniformly on R. Now for any g ∈ V ,   − k) φ(x − k) for any x ∈ R. g, φ(· (6) g, qx = k

The last series converges both in L2 (R) to g and uniformly to g, qx . Thus g(x) = g, qx for any x ∈ R.

(7)

If (iv) holds, then there exist two positive constants A, B such that  A g 2 ≤ | g, qk |2 ≤ B g 2 for any g ∈ V. k



 − ·)}k is a frame for V . We get (v). Thus {qk (·)}k = { l φ(k − l)φ(k If (v) holds, then {qk (·)}k is a frame for V . By (6) and (7), we get (iv). This completes the proof of Theorem 10.

References 1. Walter,G.: A sampling theorem for wavelet subspaces. IEEE Trans. Inform. Theory 38 (1992) 881–884 2. Janssen, A.J.E.M.: The Zak transform and sampling theorems for wavelet subspaces. J. Fourier Anal. Appl. 2 (1993) 315–327 3. Boor, C., Devore, R., Ron, A.: The structure of finitely generated shift-invariant subspaces in L2 (R). J. Func. Anal. 119 (1994) 37–78 4. Boor, C., Devore, R., Ron, A.: Approximation from shift-invariant subspaces of L2 (Rd ). Trans. Amer. Math, Soc. 341 (1994) 787–806 5. Benedetto, J.J., Walnut, D.F.: Gabor frames for L2 and related spaces. In: Benedetto, J.J., Frazier, M. W. (eds.): Wavelets: Mathematics and Applications. CRC Press, Boca Raton (1993) 1–36 6. Ron, A., Shen, Z.W.: Frames and stable bases for shift-invariant subspaces of L2 (Rd ). Can. J. Math. Vol. 5 (1995) 1051-1094 7. Daubechies, I.: Ten lectures on wavelets, Philadalphia, SIAM (1992) 8. Christensen, O: An Introduction to Frames and Riesz Bases. Birkh¨ auser, Boston (2003) 9. Chui, C., Shi, X.: Orthonormal wavelets and tight frames with arbitrary real dilations, Appl. Comp. Harmonic Anal. 9 (2000) 243–264 10. Yang, D., Zhou, X.: Irregular wavelet frames on L2 (Rn ), Science in China Ser. A. Math. 48 (2005) 277–287 11. Yang, D., Zhou, X.: Wavelet frames with irregular matrix dilations and their stability, J. Math. Anal. Appl. 295 (2004) 97–106 12. Yang, D., Zhou, X.: Frame wavelets with matrix dilations in L2 (Rn ), Appl. Math. Letters 17 (2004) 631–639 13. Zhou, X.W., Sun, W.C: On the sampling theorem for wavelet subspace. J. Fourier Anal. Appl. 1 (1999) 347–354

Characterization on Irregular Tight Wavelet Frames with Matrix Dilations Deyun Yang1,2 , Zhengliang Huan1 , Zhanjie Song3 , and Hongxiang Yang1 1

2

Department of Information Science and Technology, Taishan University, Taian 271000, China [email protected] School of Control Science and Engineering, Shandong University, Jinan 250061, China 3 School of Science, Tianjin University, Tianjin 300072, China

Abstract. There are many results in one dimensional wavelet frame theory in recent years. However, since there are some essential differences in high dimensional cases, the classical methods for one dimensional regular wavelet frames are unsuitable for the cases. In this paper, under some conditions on matrix-dilated sequences, a characterization formula for irregular tight frames of matrix-dilated wavelets is developed. It is based on the regular formulation by Chui, Czaja, Maggioni, Weiss, and on the recent multivariate results by Yang and Zhou. Keywords: Irregular frame, Tight wavelet frame, Matrix dilations, Bessel sequences.

1

Characterization on Irregular Tight Wavelet Frames with Matrix-Dilations

Assume ψ, ψ˜ ∈ L2 (Rn ), {Aj }j∈Z is a real n × n matrix sequence, B is a real n × n nonsingular matrix. A−∗ is the transpose of A−1 . Let Ψ = {ψj,k : j ∈ Z, k ∈ Zn }, Ψ˜ = {ψ˜j,k : j ∈ Z, k ∈ Zn },

(1)

−1

n where ψj,k (x) = |det Aj | 2 ψ(A−∗ j x − Bk). If {ψj,k : j ∈ Z, k ∈ Z } is a tight frame for L2 (Rn ), then ψ is called a tight frame function with respect to dilation sequence {Aj }j∈Z . There are some results in high dimensional wavelet  frame   ˜ f, ψ  ψ , g , theory in recent years (see [1] -[7] ). Let P (f, g) = n j,k j,k j∈Z,k∈Z

f, g ∈ L2 (Rn ). Now we consider the following subset of L2 (Rn ): B = {f ∈ L2 (Rn ) : f ∈ L∞ (Rn ), f has compact support.} The following lemma comes from [2]. 

This work was supported by the National Natural Science Foundation of China (Grant No. 60572113).

Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1029–1036, 2007. c Springer-Verlag Berlin Heidelberg 2007 

1030

D. Yang et al.

Lemma 1. Two families {eα : α ∈ A} and {˜ eα : α ∈ A} constitute a dual pair if and only if they are Bessel sequences and satisfy  P (f, g) := f, eα  ˜ eα , g = f, g , α∈A

for all f ,g in a dense subset of H. Given {Aj }j∈Z and B, we need the following notations: −∗ ∧ = {α ∈ Rn : α = A−∗ (Zn )}, j m, (j, m) ∈ Z × B

I(α) = {(j, m) ∈ Z × B −∗ (Zn ) : α = A−∗ j m}. Assume that {Aj }j∈Z satisfy the following conditions: there exist β ∈ (0, 1), q ∈ Z+ such that     Aj A−1  ≤ 1, Aj A−1  ≤ β for any j ∈ Z, (2) j+1 j+q     −1 A Aj  ≤ 1, A−1 Aj  ≤ β for any j ∈ Z. j+1 j+q

(3)

Then we have Theorem 2. Suppose that ψ, ψ˜ ∈ L2 (Rn ) have the property that the following two functions are locally integrable: 2 2           . ψ(A ψ(A ω) , ω)   j j   j∈Z

j∈Z

Then for f, g ∈ B, P (f, g) converges absolutely. Moreover, 1 |det B|



˜ −1  ψ(A j ω)ψ(Aj (ω + Aj m)) = δα,0 ,

(j,m)∈I(α)

for a.e. ω ∈ Rn and for all α ∈ ∧, if and only if P (f, g) = f, g for all f, g ∈ B. By Theorem 2 and Lemma 1, the following two theorems are the immediate consequences. Theorem 3. Assume that Ψ, Ψ˜ ⊂ L2 (Rn ) are two Bessel sequences, 2 2           ψ(Aj ω) , ψ(Aj ω) j∈Z

j∈Z

are locally integrable. Then Ψ and Ψ constitute a pair of dual frames for L2 (Rn ) if and only if  1 ˜ −1  ψ(A j ω)ψ(Aj (ω + Aj m)) = δα,0 , |det B| (j,m)∈I(α)

for a.e. ω ∈ Rn and for all α ∈ ∧.

Characterization on Irregular Tight Wavelet Frames with Matrix Dilations

1031

Theorem 4. ψ ∈ L2 (Rn ) satisfies 1 |det B|



 j ω)ψ(A  j (ω + A−1 m)) = δα,0 , ψ(A j

(j,m)∈I(α)

for a.e. ω ∈ Rn and for all α ∈ ∧, if and only if {ψj,k }j,k is a tight frame with constant 1 for L2 (Rn ).

2

Proof of Main Results

In fact, we only give the proof of Theorem 2. Proof of Theorem 2: We first prove that P (f, g) is absolutely convergent. Now, let    f, ψj,k  ψ˜j,k , f , j ∈ Z. Gj := k∈Zn

Using the Parseval identity, it is easy to show

 1 ˜ −∗  j ω + B −∗ s))dω. Gj = s)ψ(A f(ω)ψ(A f(ω + A−1 j ω)( j B |det B| Rn n s∈Z

 Thus, we would like to show that j∈Z Gj is absolutely convergent. To do so, it is enough to show that the following two series are absolutely convergent: 

˜   I := f(ω)ψ(A j ω)f (ω)ψ(Aj ω)dω, Rn

j∈Z

and



˜ f(ω)ψ(A j ω)(

II := Rn



−∗  j ω + B −∗ s))dω. s)ψ(A f(ω + A−1 j B

s∈Zn \{0}

Since   2  2     ˜ ˜ −∗   j ω + B −∗ s) ≤ 1 (ψ(A s) ), ψ(Aj ω)ψ(A j ω) + ψ(Aj ω + B 2

(4)

˜ f that I is absolutely convergent. It follows from (4) and the conditions on ψ, ψ, ˜ On the other hand, for h ∈ {ψ, ψ},

  2     −∗   s) h(Aj ω) dω f (ω)f (ω + A−1 j B j∈Z,s∈Zn \{0}



= Rn



Rn



j∈Z,s∈Zn \{0}

2    1    −1  −1   h(ω) dω. (5) f (Aj ω)f (Aj (ω + B −∗ s))  |det Aj |

Since f ∈ B, it follows from (2) and (3) that there exists constant C0 > 0 such that for each j ∈ Z, ω ∈ Rn , the number of s ∈ Zn \ {0} satisfying

1032

D. Yang et al. −1 −∗  −1 s) = 0 f(A−1 j ω)f (Aj ω + Aj B

is less than C0 |det Aj |. Then  j∈Z,s∈Zn \{0}

   1   −1  −1  χF (A−1 f (Aj ω)f (Aj (ω + B −∗ s)) ≤ C j ω), (6) |det Aj | j∈Z

 2   where C = C0 f and F is compact in Rn \ {0}. By (2) and (3), {Aj }j∈Z is ∞

an MFS (see [4]). Therefore there exists constant K > 0 such that  n χF (A−1 j ω) ≤ K for any ω ∈ R \ {0}.

(7)

j∈Z

Now, it follows from (5), (6) and (7) that the series II is convergent. Hence, we can rearrange the series for P (f, f ), to obtain 1  

˜  f(ω)f(ω + α) P (f, f ) = ψ(A j ω)ψ(Aj (ω + α)) dω. |det B| n α∈∧ R (j,m)∈I(α)

Then using the polarization identity for the form P , we get the sufficiency of the theorem. Next, we prove the necessary condition. From the above discussion, we have, let P (f, g) = M (f, g) + R(f, g), where

1 M (f, g) := |det B|

and R(f, g) :=

1 |det B|





 α∈∧\{0}

 ˜  g(ω)f(ω) ψ(A j ω)ψ(Aj ω) dω,

Rn

j∈Z



g(ω)f(ω + α)

Rn

˜  ψ(A j ω)ψ(Aj (ω + α))dω.

(j,m)∈I(α)

Now, fix ω0 ∈ Rn \ {0} and let f1 (ω) =

1 χω +H (ω), μ(Hk )1/2 0 k

where μ(·) is the Lebesgue measure on Rn and n Hk = A−1 k , = {ξ ∈ R : |ξ| = 1}.

Then M (f1 , f1 ) =

1 |det B| μ(Hk )



 ˜ j ω)ψ(A  j ω)dω ψ(A

ω0 +Hk j∈Z

Characterization on Irregular Tight Wavelet Frames with Matrix Dilations

1033

and |R(f1 , f1 )| ≤

1 |det B| μ(Hk )

×







α∈∧\{0} (j,m)∈I(α)







   ˜  j (ω +α)) dω ψ(Aj ω)ψ(A

α∈∧\{0}(j,m)∈I(α) (ω0 +Hk )∩(α+ω0 +Hk )

1 ≤ |det B| μ(Hk )







α∈∧\{0} (j,m)∈I(α)

2 1/2    ˜ ψ(Aj ω) dω

(ω0 +Hk )∩(α+ω0 +Hk )

 2 1/2   . ψ(Aj (ω + α)) dω

(ω0 +Hk )∩(α+ω0 +Hk )

If (ω0 + Hk ) ∩ (ω0 + α + Hk ) = ∅, then α ∈ A−1 k ( − ). Thus for (j, m) ∈ I(α), −∗ m ∈ (Aj A−1 (Zn ) = Qj,k . k ( − )) ∩ B Δ

Using (2) and (3), there exists a constant c > 0 such that j ≥ k + c. However, |R(f1 , f1 )| ≤ ×

 2 1/2   1  ˜  ψ(Aj ω) dω |det B| μ(Hk ) ω +H 0 k j≥k+c m∈Qj,k \{0}

 2 1/2     . ψ(Aj (ω)) dω

(8)

ω0 +Hk

j≥k+c m∈Qj,k \{0}

For the first factor,  1 μ(Hk )



j≥k+c m∈Qj,k \{0}

≤ |det Ak |





j≥k+c

 2  ˜  ψ(Aj ω) dω

ω0 +Hk

  −1  C det(Aj A−1 k ) |det Aj |

j≥k+c

=C





   ˜ 2 ψ(ω) dω

Aj (ω0 +Hk )

   ˜ 2 ψ(ω) dω,

(9)

Aj (ω0 +Hk )

Here, we have used the fact #(Qj,k \ {0}) ≤ Cμ(Aj A−1 k (Δ − Δ)), which can be obtained by (2), (3). Similarly, we can estimate the second factor. Now, using (2) and (3), it is easy to prove that: (P1 ) there exist k1 , k2 ∈ Z such that the intersection of any k2 sets in {Aj (ω0 + Hk )}k≥k1 ,j≥k+c is empty. (P2 ) there exist constants k3 ∈ Z, λ > 1 such that for any k ≥ k3 , j ≥ k + c, Aj (ω0 + Hk ) ⊂ {ω : |ω| ≥ k3 λj |ω0 |}.

1034

D. Yang et al.

Using (8), (9), (P1 ) and (P2 ), we can get |R(f1 , f1 )| → 0 when k → ∞. Then, by Lebesgue theorem, we have

 1 ˜ j ω)ψ(A  j ω)dω ψ(A 1 = lim k→∞ |det B| μ(Hk ) ω0 +H k j∈Z

1  ˜  j ω0 ), = ψ(Aj ω0 )ψ(A |det B| j∈Z

which proves our claim for α = 0. This also shows that M (f, g) = f, g , for any f, g ∈ B. To complete the proof of our theorem, choose α0 ∈ ∧ \ {0}, and write R(f, g) = R1 (f, g) + R2 (f, g), where R1 (f, g) =

1 |det B|

1 R2 (f, g) = |det B|





g(ω)f(ω + α0 )

Rn

˜  ψ(A j ω)ψ(Aj (ω + α0 ))dω.

(j,m)∈I(α0 )



 α∈∧\{0,α0 }

g(ω)f(ω+α)

Rn



˜  ψ(A j ω)ψ(Aj (ω + α))dω.

(j,m)∈I(α)

 2    Next, let ω0 ∈ Rn \ {0} be any Lebesgue point of functions j∈Z ψ(A j ω) and  2   ˜  ω) ψ(A   . For fixed k ∈ Z, we define f2 , g2 as follows: j j∈Z f2 (ω + α0 ) =

1 1 χω0 +Hk (ω), g2 (ω) = χω +H (ω). 1/2 μ(Hk ) μ(Hk )1/2 0 k

Then, using Lebesgue Theorem, we have lim R1 (f2 , g2 ) =

k→∞

1 |det B|



˜  ψ(A j ω0 )ψ(Aj (ω0 + α0 )).

(j,m)∈I(α0 )

To estimate R2 (f2 , g2 ), we note that if g2 (ω)f2 (ω + α) = 0, then α ∈ α0 + A−1 k ( − ). Since α = A−1 j m ∈ ∧ \ {0, α0 }, it follows from (2), (3) that there exist J0 ∈ Z such that for any j ≤ J0 , m ∈ B −∗ (Zn ) \ {0}, A−1 / α0 + Dk , where j m∈ Dk = A−1 k ( − ).

Characterization on Irregular Tight Wavelet Frames with Matrix Dilations

1035

Hence R2 (f2 , g2 ) can be rearranged as R2 (f2 , g2 )  1 = |det B|





j≥J1 m∈(Aj α0 +Aj Dk )\{0}

J1  1 + |det B|





˜  g2 (ω)f2 (ω + α)ψ(A j ω)ψ(Aj (ω + α))dω

Rn

˜  g2 (ω)f2 (ω + α)ψ(A j ω)ψ(Aj (ω + α))dω

Rn

j=J0 m∈(Aj α0 +Aj Dk )\{0}

= R2,1 (f2 , g2 ) + R2,2 (f2 , g2 ), where J1 ∈ Z. Using (2) and (3), when k is large enough, for each j(J0 ≤ j ≤ J1 ) the number of m satisfying m ∈ (Aj α0 +Aj Dk ) is a constant which is not related on k. Thus, by Lebesgue theorem, we have limk→∞ R2,2 (f2 , g2 ) = 0. To estimate R2,1 (f2 , g2 ), we would like to prove that for each ε > 0 and k which is large enough, there exists J1 ∈ Z such that R2,1 (f2 , g2 ) ≤ ε. In fact, similar to R(f1 , f1 ), we have R2,1 (f2 , g2 ) ≤ ×

 1 |det B| |Hk |









j≥J1 m∈(Aj α0 +Aj Dk )\{0}



 2 1/2  ˜  ψ(Aj ω) dω

ω0 +Hk

2 1/2    . ψ(Aj ω) dω

ω0 +Hk

j≥J1 m∈(Aj α0 +Aj Dk )\{0}

Therefore it is enough to estimate just one of these factors. In fact, fix any k which is large enough. Using the conditions (2) and (3), there exists a constant C such that for any j ≥ J1 , # (Aj α0 + Aj Dk ) ∩ B −T (Zn ) ≤ 1 + C |det Aj | |det Ak |−1 , where #(·) is the number of elements in a given set. Then 1  |Hk |





j≥J1 m∈(Aj α0 +Aj Dk )\{0}

 2  ˜  ψ(Aj ω) dω

ω0 +Hk

2  1    ˜ −1 (1 + C |det Aj | |det Ak | ) ψ(Aj ω) dω |Hk | ω0 +Hk j≥J1

   2  1

  ˜  ˜ 2  = ψ(Aj ω) dω + ψ(ω) dω. |Hk | ω0 +Hk Aj (ω0 +Hk ) ≤

j≥J1

j≥J1

Let J1 ∈ Z such that lim

 2   ω ) ψ(A  j 0  < ε. Then j≥J1



k→∞

 j≥J1

1 μ(Hk )

ω0 +Hk

2    ˜ ψ(Aj ω) dω < ε/2.

1036

D. Yang et al.

By (P1 ), (P2 ) and Lebesgue theorem, we have   

 ˜ 2 lim ψ(ω) dω = 0. k→∞

j≥J1

Aj (ω0 +Hk )

Thus for any ε > 0, there exists J1 such that limk→∞ |R2,1 (f2 , g2 )| ≤ ε. Finally, we obtain  1 ˜  ψ(A j ω0 )ψ(Aj (ω0 + α0 )) = 0, for any α0 ∈ ∧ \ {0}. |det B| (j,m)∈I(α0 )

We complete the proof of Theorem 2.

3

Remarks

In the proof of Theorem 2, we have used the following result in some places. Remark 3.1. If (2) and (3) holds, then there exist constants C, q ∈ Z+ and β ∈ (0, 1) such that (i) for any p, j ∈ Z+ , j ≥ pq and ξ ∈ Rn , |Aj ξ| ≥ C( β1 )p |ξ|. (ii) for any p, −j ∈ Z+ , j ≤ −pq and ξ ∈ Rn ,|Aj ξ| ≤ Cβ p |ξ|. (iii) for any j, k ∈ Z, j < k and ξ ∈ Rn , we have |Ak ξ| ≥ C( β1 )

k−j q

|Aj ξ|.

j Remark 3.2. Assume that A is an expand matrix, S = Γ \∪−1 j=−∞ (A Γ ), where n Γ is a bounded measurable subset in R , supx,y∈Γ x − y < 1 and the origin is an interior point of Γ . Let ψ = χS . By Theorem 4, ψ is a tight frame function with respect to dilation sequence {Aj }j∈Z . If Di (i = 1, · · · , p) are nonsingular matrices which is commutative with A, and Asp+i = As+1 Di for any s ∈ Z, i = 1, · · · , p, then using Theorem 4 again, it is easy to show that ψ is also a tight frame function with respect to dilation sequence {Aj }j∈Z .

References 1. Hern´ andez, E., Weiss, G.: A first Course on Wavelets. CRC Press, Boca Raton (1996) 2. Frazier, M., Garrigos, G., Wang, K., Weiss, G.: A characterization of functions that generate wavelet and related expansion. J. Fourier Anal. Appl. 3 (1997) 883–906 3. Frazier, M., Jawerth, B., Weiss, G.: Littlewood-Paley theory and the study of function spaces. CBMS Regional Conference Series in Mathematics, 79, AMS, Providence, R1 (1991) 4. Yang, D., Zhou, X.: Wavelet frames with irregular matrix dilations and their stability. J. Math. Anal. Appl. 295 (2004) 97–106 5. Yang, D., Zhou, X.: Irregular wavelet frames on L2 (Rn ). Science in China Ser. A Mathematics 2 (2005) 277–287 6. Yang, D., Zhou, X.: Frame wavelets with matrix dilations in L2 (Rn ). Appl. Math. Letters 17 (2004) 631–639 7. Yang, X., Zhou, X.: An extension of Chui-Shi frame condition to nonuniform affine operations. Appl. Comput. Harmon. Anal. 16 (2004) 148–157

Feature Extraction of Seal Imprint Based on the Double-Density Dual-Tree DWT Li Runwu1, Fang Zhijun1, Wang Shengqian2, and Yang Shouyuan1 1

School of Information Technology, Jiangxi University of Finance & Economics, Nanchang, China, 330013 [email protected],[email protected], [email protected] 2 Jiangxi Science & Technology Teacher College, Nanchang, China, 330013 [email protected]

Abstract. The most important problem on seal imprint verification is to extract the imprint feature, which is independent upon varying conditions. This paper proposes a new method to extract the feature of seal imprint using the doubledensity dual-tree DWT due to its good directional selectivity, approximate shift invariance and computational efficiency properties. 16 different directions information is obtained as a seal imprint image transforms by using doubledensity dual-tree DWT. Experimental results show that their directional behaviors are much different, although the frequency distributions of true seals are similar to false seals’. This method is stable and computationally efficient for seal imprint. Keywords: Seal imprint verification, the double-density dual-tree DWT, feature extraction.

1 Introduction Seal imprint has been commonly used for personal confirmation in the Oriental countries. Seal imprint verification for document validation is sticking point. Therefore, it is highly desirable that large numbers of seal imprint are verified automatically, speedily and reliably. However, seal imprint verification is a very difficult problem [1]. Its difficulty comes from two aspects [2]: (1) The various stamping conditions may affect the quality of the seal imprint images; (2) The forgery seal imprint may be very similar to the original seal imprint. Therefore, it is important to find separate and efficient features of seal imprint. Over the past years, many studies have been focused on extracting the impression feature in frequency domain. Gabor filters [3] have been used in texture analysis due to their good directional selectivity in different frequency scales. Serkan el. [4] proposed a new feature extraction method utilizing dual tree complex wavelet transform (DT-CWT). In the conventional DWT domain, it is difficult to achieve perfect reconstruction and equal frequency responses. In addition, directional selectivity is poor in DWT Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1037–1044, 2007. © Springer-Verlag Berlin Heidelberg 2007

1038

R. Li et al.

domain. In 2004, Selesnick [5] proposed the double-density dual-tree DWT, which possess the properties of the double-density DWT as well as the DT-CWT. In this paper, the double-density dual-tree DWT is employed for decomposing a seal imprint image into the bandpass sub-images that are strongly oriented at 16 different angles. As higher directional selectivity is obtained, the dominant frequency channel and orientation of the pattern are detected with a higher precision [5]. This paper is organized as follows. In Section 2, the double-density dual-tree DWT, its dual-tree filters, and its properties are introduced. The projection operators of bandpass subimages in the double-density dual-tree DWT are presented in Section 3. The experimental results are shown in Section 4, followed by the conclusions drawn in Section 5.

2 The Double-Density Dual-Tree DWT Theory 2.1 The Dual-Tree CWT In order to have directional selectivity with Mallat’s efficient separable, it is necessary to use complex coefficient filters. Filters are chosen to be linear phase so that odd-length highpass filters have even symmetry and the even-length highpass filters have odd symmetry about their midpoints [7]. Then, the DT CWT comprises two trees of real filters, A and B, which produce the real and imaginary parts of the complex coefficients. Fig.1 shows the complete 2-D DT CWT structure over 2 levels. Each level of the trees produces 6 complex-valued bandpass subimages {D(n,m) , n=1,……,6}(where m represents the scale.) as well as two lowpass subimages A(1,m) and A(2,m) on which sub-sequent stages iterate [4]. {D(n,m) , n=1,……,6} are strongly oriented at 15, 45, 75, -15, -45, -75 degrees. The results of two-level decomposition of the DT CWT are shown in Fig. 2. It is seen that image subbands from six orientations are obtained as shown above, while the seal impression image at each level of decomposition contains two parts: the real parts and the imaginary parts. There are relative 8 image subbands at each level. Therefore, there are 12 (6×2) image highpass subbands at each level, each of which are strongly oriented at distinct angles. 2.2 The Double-Density Dual-Tree DWT The design of dual-tree filters is addressed in [6], through an approximate Hilbert pair formulation for the ‘dual’ wavelets. Selesnick [5] also proposed the double-density DWT and combined both frame approaches. The double-density complex DWT is based on two scaling functions and four distinct wavelets, each of which is specifically designed such that the two wavelets of the first pair are offset from one other by one half, and the other pair of wavelets form an approximate Hilbert transform pair [4].

Feature Extraction of Seal Imprint Based on the Double-Density Dual-Tree DWT

1039

The structure of the filter banks corresponding to the double-density complex DWT consists of two oversampled iterated filter banks operating in parallel on the same input data. The double-density DT CWT is shown in Fig.3 [6]. We see that each tree produces 9 subbands, 8 (hi1 hi2 hi3 hi4 hi5 hi6 hi7 hi8) of which are strongly oriented at 8 different angles. Then, 16 (8×2) image subbands (Tree A and Tree B) are corresponding to 16 different angles at each level. Fig.4 is the second-level decomposition results of the double-density dual-tree DWT.

、 、 、 、 、 、 、

Fig. 1. 2-D CWT

(a) The real parts of coefficients (Subimages of each orientation)

(b) The imaginary parts of coefficients (Subimages of each orientation)

Fig. 2. 2th-level decomposition subbands of Seal imprint image

Here we obtain image subbands from 16 orientations as in the previous case, but the seal imprint image at each level of decomposition contains two parts (the real parts and the imaginary parts, each which is the relative 18 sub-images at each level.)

1040

R. Li et al.

Fig. 3. 2-D Double-density Dual-tree DWT

(a) The real parts of coefficients

(b) The imaginary parts of coefficients

Fig. 4. 2th-level decomposition subbands in the double-density dual-tree DWT

Feature Extraction of Seal Imprint Based on the Double-Density Dual-Tree DWT

1041

Therefore, there are 36 sub-images at each level containing 32 high-frequency subbands, some of which are the relative 16 different angles at each level. Since higher directional selectivity is obtained, the dominant, frequency channel and orientation of the pattern is detected with a higher precision.

3 The Projections of High-Frequency Subbands The identification of seal imprint is more attention to detail information because the overall profile of seal imprint is very similar. We proposed a method utilizing the shift invariant properties and greater directional selectivity of the double-density dual-tree DWT. To reduce the computational complexity, and get most effective feature, the feature extraction of seal imprint utilize the projection operators of highfrequency subbands in the double-density dual-tree DWT. The horizontal direction projections of the subbands of six different angles as the example are demonstrated in Fig.5. 150

2500 2000

100

200

200

150

150

1500

100

100 1000 50

50 50

500

0 0

0 -50

0

-500

-50 -100

-1000 -100 -50

-150

-1500 -150

-2000 -100

0

10

20

30

40

50

60

70

100

50

-2500

0

10

20

30

40

50

60

70

-200

-200

0

10

20

30

40

50

60

70

-250

0

2000

200

250

1500

150

200

1000

10

20

30

40

50

60

70

10

20

30

40

50

60

70

150

100

500

100 50

0 0

50 0

-500

0

-50

-50 -1000

-50 -100

-1500

-100

-150

-2000 -150

0

10

20

30

40

50

60

70

-2500

-100

0

10

20

30

40

50

60

70

-200

-150

0

10

20

30

40

50

60

70

-200

0

Fig. 5. The horizontal projected vectors of high-frequency subbands 150

100

50

0

-50

-100

0

10

20

30

40

50

60

70

Fig. 6. Feature extraction of projected vectors

To reduce the computational complexity, we search for three maximum and three minimum values, whose locations are regarded as the features of seal imprint in the

1042

R. Li et al.

projected vectors of high-frequency subbands. Aforementioned process is shown in Fig.6.There are 32 high-frequency subbands which are the relative 6×32 values of the features of the seal imprint at each level. The detailed process of feature extraction is illustrated in Fig.7. As shown in Fig.4, we can get 16 different directions information of seal imprint image by the vertical/horizontal projection of the 32 high-frequency subbands.

Fig. 7. Projections of high-frequency subbands and the feature extraction

4 Experiment Result In our experiment, the feature extraction of seal imprint is in the second level of double-density dual-tree DWT. The seal imprint images are of size 256×256.The following images are the three true seal imprints and their corresponding false seal imprints. In Fig.8, we see that false seals are similar to the given true seals too much. The performance of this method has been investigated using a database containing 100×3 seal imprint images (three groups are A, B and C respectively, each contains 50 true seals and 50 false seals.), and promising results have been obtained. The match ratio of the features is used for the classification algorithm. To reduce FAR (False Acceptance Rate, set the threshold of 46% matching ratio for B-Group and A-Group, and 50% matching ratio for C-Group. The experimental results are shown in Table 1.

Feature Extraction of Seal Imprint Based on the Double-Density Dual-Tree DWT

(a0) true seal imprint

(b0) true seal imprint

(a1) false seal imprint

(b1) false seal imprint

1043

(c0) true seal imprint

(c1) false seal imprint

Fig. 8. Seal verification of three groups Table 1. Results of seal imprint verification FAR (%)

FRR (%)

Recognition rate

A

0

8

96%

B C

0 0

6 14

97% 93%

Annotation: FAR (False Acceptance Rate), FRR (False Rejection Rate)

In Table 1, we can see that recognition rates of rotundity-seals (A) and ellipse-seals (B) are more high than square -seals’ (C). As rotundity-seals (A) and ellipse-seals (B) possess more salient features of direction, their features are separated highly in comparison with square -seals’.

5 Conclusion In this paper, we proposed a feature extraction method of seal imprint based on the double-density dual-tree DWT. Through the double-density dual-tree DWT of seal imprint images, 16 different directions information is obtained. Although the frequency distributions of true seals are similar to false seals’, their directional behaviors are much different. The method is stable and computationally efficient. The

1044

R. Li et al.

experimental results demonstrate that the features which have directional selectivity properties extracted by this method are appropriate for verification of similar structure. Acknowledgments. This work is supported by the National Natural Science Foundation of China (No. 60662003, 60462003, 10626029), the Science & Technology Research Project of the Education Department of Jiangxi Province (No.2006-231), Jiangxi Key Laboratory of Optic-electronic & Communication and Jiangxi University of Finance & Economics Innovation Fund.

References 1. Fam T. J., Tsai W H: Automatic Chinese seal identification [J]. Computer Vision Graphics and Image Processing, 1984, 25(2):311 - 330. 2. Qing H., Jingyu Y., Qing Z.: An Automatic Seal Imprint Verification Approach [J]. Pattern Recognition, 1995, 28 (8):1251 - 1265. 3. Jain, A.K. and Farrokhnia. F., “Unsupervised texture segmentation using Gabor filter,” Pattern Recognition, 1991.24, 1167-1186 4. Hatipoglu S., Mitra S K. and Kingsbury N.: Classification Using Dual-tree Complex Wavelet Transform. Image Processing and Its Applications, Conference Publication No. 465 @IEE 1999 5. Selesnick I. W.: The Double-Density Dual-Tree DWT” in IEEE Transactions on Signal Processing, 52 (5): 1304-14, May 2004. 6. Selesnick I. W.: Hilbert Transform Pairs of Wavelet Bases, Signal Processing Letters, vol. 8, no. 6, pp. 170.173, Jun. 2001. 7. Kingsbury N.G.: Image Processing with Complex Wavelets. Phil. Trans. Royal Society London A, September 1999.

Vanishing Waves on Semi-closed Space Intervals and Applications in Mathematical Physics Ghiocel Toma Department of Applied Sciences, Politehnica University, Bucharest, Romania

Abstract. Test-functions (which differ to zero only on a limited interval and have continuous derivatives of any order on the whole real axis) are widely used in the mathematical theory. Yet less attention was given to intuitive aspects on dynamics of such test functions or of similar functions considered as possible solution of certain equations in mathematical physics (as wave equation). This study will show that the use of wave equation on small space interval considered around the point of space where the sources of the generated field are situated can be mathematically represented by vanishing waves corresponding to a superposition of travelling test functions. As an important consequence, some directions for propagating the generated wave appears and the possibility of reverse radiation being rejected. Specific applications for other phenomena involving wave generation (as the Lorentz formulae describing the generation of a wave with different features after the interaction with the observer’s material medium are also presented. Keywords: vanishing waves, test functions, semiclosed intervals.

1

Introduction

Test-functions (which differ to zero only on a limited interval and have continuous derivatives of any order on the whole real axis) are widely used in the mathematical theory of distributions and in Fourier analysis of wavelets. Yet such test functions, similar to the Dirac functions, can’t be generated by a differential equation. The existence of such an equation of evolution, beginning to act at an initial moment of time, would imply the necessity for a derivative of certain order to make a jump at this initial moment of time from the zero value to a nonzero value. But this aspect is in contradiction with the property of test-functions to have continuous derivatives of any order on the whole real axis, represented in this case by the time axis. So it results that an ideal test-function can’t be generated by a differential equation (see also [1]); the analysis has to be restricted at possibilities of generating practical test-functions (functions similar to test-functions, but having a finite number of continuous derivatives on the whole real axis) useful for wavelets analysis. Due to the exact form of the derivatives of test-functions, we can’t apply derivative free algorithms [2] or algorithms which can change in time [3]. Starting from the exact mathematical expressions Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1045–1052, 2007. c Springer-Verlag Berlin Heidelberg 2007 

1046

G. Toma

of a certain test-function and of its derivatives, we must use specific differential equations for generating such practical test-functions. This aspect is connected with causal aspects of generating apparently acausal pulses as solutions of the wave equation, presented in [4]. Thus, such testfunctions, considered at the macroscopic scale (that means not as Dirac-functions) can represent solutions for certain equations in mathematical physics (an example being the wave-equation). The main consequence of this consists in the possibility of certain pulses to appear as solutions of the wave-equation under initial null conditions for the function and for all its derivatives and without any free-term (a source-term) to exist. In order to prove the possibility of appearing acausal pulses as solutions of the wave-equation (not determined by the initial conditions or by some external forces) we begin by writing the wave-equation 1 ∂2φ ∂2φ − 2 2 =0 2 ∂x v ∂t

(1)

for a free string defined on the length interval (0, l) (an open set), where φ represents the amplitude of the string oscillations and v represents the velocity of the waves inside the string medium. At the initial moment of time (the zero moment) the amplitude φ together with all its derivatives of first and second order are equal to zero. From the mathematical theory of the wave-equation we know that any solution of this equation must be a superposition of a direct wave and of a reverse wave. We shall restrict our analyze at direct waves and consider a supposed extension of the string on the whole Ox axis, φ being defined by the function  1 exp ( (x−vt+1) 2 −1 ) for |x − vt + 1| < 1 (2) φ(τ ) = 0 for |x − vt + 1| ≥ 1 where t ≥ 0. This function for the extended string satisfies the wave-equation (being a function of x-vt , a direct wave). It is a continuous function, having continuous partial derivatives of any order for x ∈ (−∞, ∞) and for x ≥ 0. For x ∈ (0, l) (the real string)the amplitude φ and all its derivatives are equal to zero at the zero moment of time, as required by the initial null conditions for the real string (nonzero values appearing only for x ∈ (−2, 0) for t = 0, while on this interval |x − vt + 1| = |x + 1| < 1). We can notice that for t = 0 the amplitude φ and its partial derivatives differ to zero only on a finite space interval, this being a property of the functions defined on a compact set (test functions). But the argument of the exponential function is x − vt ; this implies that the positive amplitude existing on the length interval (−2, 0) at the zero moment of time will move along the Ox axis in the direction x = +∞. So at some time moments t1 < t2 < t3 < t4 < . . . after the zero moment the amplitude φ will be present inside the string, moving from one edge to the other. It can be noticed that the pulse passes through the real string and at a certain time moment tf in (when the pulse existing at the zero moment of time on the length interval (−2, 0) has moved into the length interval (l, l + 2)) its action upon the real string ceases. We must point the fact that the limit points x = 0 and x = l are not considered

Vanishing Waves on Semi-closed Space Intervals and Applications

1047

to belong to the string; but this is in accordance with the rigorous definition of derivatives (for these limit points can’t be defined derivatives as related to any direction around them). This point of space (the limit of the open space interval considered) is very important for our analysis, while we shall extend the study to closed space intervals. Considering small space intervals around the points of space where the sources of the generated field are situated (for example, the case of electrical charges generating the electromagnetic field), it will be shown that causal aspects require the logical existence of a certain causal chain for transmitting interaction from one point of space to another, which can be represented by mathematical functions which vanishes (its amplitude and all its derivatives) in certain points of space. From this point of space, an informational connection for transmitting the wave further could be considered (instead of a tranmission based on certain derivatives of the wave). Thus a kind of granular aspect for propagation along a certain axis can be noticed, suitable for application in quantum theory. As an important consequence, by a multiscale analysis and the use of non-Markov systems, some directions for propagating the generated wave will appear and the possibility of reverse radiation will be rejected. Finally. specific applications for other phenomena involving wave generation (as the Lorentz formulae describing the generation of a wave with different features after the interaction with the observer’s material medium) will be also presented.

2

Test-Functions for Semi-closed Space Intervals

If we extend our analysis to closed intervals by adding the limit of the space interval to the previously studied open intervals (for example by adding the points x = 0 and x = l to the open interval (0, l), we should take into account the fact that a complete mathematical analysis usually implies the use of a certain function f (t) defined at the limit of the working space interval (the point of space x = 0, in the previous example). Other complete mathematical problems for the wave equation or for similar equations in mathematical physics use functions f0 (t), fl (t) corresponding to both limits of the working space intervals (the points of space x = 0 and x = l in the previous example) or other supplementary functions. The use of such supplementary functions defined on the limit of the closed interval could appear as a possible explanation for the problem of generating acausal pulses as solutions of the wave equation on open intervals. The acausal pulse presented in the previous paragraph (similar to wavelets) travelling along the Ox axis requires a certain non-zero function of time f0 (t) for the amplitude of the pulse for the limit of the interval x = 0. It could be argued that the complete mathematical problem of generating acausal pulses for null initial conditions on an open interval and for null functions f0 (t) and fl (t) corresponding to function φ (the pulse amplitude) at the limits of the interval x = 0 and x = l respectively, would reject the possibility of appearing the acausal pulse presented in the previous paragraph. The acausal pulse φ presented implies non-zero values

1048

G. Toma

for f0 and fl at the limit of the closed interval at certain time moments, which represents a contradiction with the requirement for these functions f0 and fl to present null values at any time moment. By an intuitive approach, null external sources would imply null values for functions f0 and fl and (as a consequence) null values for the pulse amplitude φ. Yet it can be easily shown that the problem of generating acausal pulses on semi-closed intervals can not be rejected by using supplementary requirements for certain functions f (t) defined at one limit of such space intervals. Let us simply suppose that instead of function  1 exp ( (x−vt+1) 2 −1 ) for |x − vt + 1| < 1 (3) φ(τ ) = 0 for |x − vt + 1| ≥ 1 presented in previous paragraph we must take into consideration two functions φ0 and φl defined as  1 exp ( (x−vt+m) 2 −1 ) for |x − vt + 1| < 1 φ0 (τ ) = (4) 0 for |x − vt + m| ≥ 1 

and φl (τ ) =

1 − exp ( (x+vt−m) 2 −1 ) for |x − vt + 1| < 1 0 for |x + vt − m| ≥ 1

(5)

with m selected as m > 0, mn − 1 > l (so as both functions φ0 and φl to have non-zero values outside the real string and to be asymmetrical as related to the point of space x = 0. While function φ0 corresponds to a direct wave (its argument being (x − vt)) and φl corresponds to a reverse wave (its argument being (x + vt)) it results that both functions φ0 and φl arrive at the same time at the space origin x = 0, the sum of these two external pulses being null all the time (functions φ0 and φl being asymmetrical, φ0 = −φl ). So by requiring that φ(t) = 0 for x = 0 (the left limit of a semi-closed interval [0, l) ) we can not reject the mathematical possibility of appearing an acausal pulse on a semiclosed interval. This pulse is in fact a travelling wave propagating from x = −∞ towards x = ∞ which vanishes at the point of space x = 0. Moreover, its derivatives are also equal to zero at this point of space for certain time moments (when both travelling pulses cease their action in the point of space x = 0). This pulse is a solution of the wave-equation on the semi-closed interval [0, l), and can be very useful for considering a transmission of interaction on finite space-intervals (in our case the interaction being transmitted from x = l towards x = 0). From this point of space, at the time moments when the amplitude and all its derivatives are equal to zero the interaction can be further transmitted by considering an informational connection; the mathematical form of the pulse is changed, a new wave should be generated for the adjacent space interval and a mathematical connection for transmitting further the interaction ( towards x = −∞) is not possible while the pulse amplitude and all its derivatives vanish at this time moments at the point of space x = 0. This aspect implies a step-by

Vanishing Waves on Semi-closed Space Intervals and Applications

1049

step transmission of interaction starting from an initial semi-closed interval (its open limit corresponding to the source of the fieldd, for example) to other space intervals. This corresponds to a granular aspect of space suitable for applications in quantum physics, where the generation and annihilation of quantum particles should be considered on limited space-time intervals. For this purpose, specific computer algorithms and memory cells corresponding to each space interval should be used. The informational connection from one space interval to another should be represented by computer operations of data initialization.

3

Aspects Connected with Spherical Waves

A possible mathematical explanation for this aspect consists in the fact that we have used a reverse wave (an acausal pulse) propagating from x = ∞ towards x = −∞, which is first received at the right limit x = l of the semi-closed interval [0, l) before arriving at the point of space x = 0. It can be argued that in case of a closed space interval [0, l] we should consider the complete mathematical problem, consisting of two functions f0 (t), fl (t) corresponding to both limits of the working space intervals (the points of space x = 0 and x = l. But in fact the wave equation corresponds to a physical model valid in the three-dimensional space, under the form 1 ∂2φ ∂2φ ∂2φ ∂2φ + 2 + 2 − 2 2 =0 2 ∂x ∂y ∂z v ∂t

(6)

and the one-dimensional model previously used is just an approximation. Moreover, the source of the field is considered at a microscopic scale (quantum particles like electrons for the case of the electromagnetic field, for example) and the emitted field for such elementary particles presents a spherical symmetry. Transforming the previous equation in polar coordinates and supposing that the function φ depends only on r (the distance from the source of the field to the point of space where this emitted field is received), it results ∂2U 1 ∂2U − 2 2 =0 2 ∂r v ∂t

(7)

U = rϕ

(8)

where An analysis of the field emitted from the space origim towards a point of space r = r0 (where the field is received) should be performed on the space interval (0, r] (a semi-closed interval); the point of space r = 0 can not be included in the working interval as long as the solution φ(r) for the field is obtained by dividing the solution U (r) of the previous equation (in spherical coordinates) through r (the denominator of the solution φ being zero, some supplementary aspects connected to the limit of functions should be added, but still without considering a solution for the space origin).

1050

G. Toma

Thus an asymmetry in the required methods for analyzing phenomena appears. In a logical manner, by taking also into consideration the free-term (corresponding to the source of the field) situated in the point of space x = 0 (the origin) it results that the use of function depending on x − vt (mentioned in the previous paragraph) or r − vt (for the spherical waves) represents also a limit for the case of a sequence of small interactions acting as external source (freeterm)- changes in the value of partial derivatives as related to space coordinates - changes in the partial derivatives of the amplitude as related to time - changes in the value of the function, so as the possibility of appearing acausal pulses (not yet observed) to be rejected. Such a causal chain can be represented in a mathematical form as a differential equation able to generate functions similar to test functions, defined as practical test functions only as an approximation at a greater scale of space-time for the case when the length and time intervals corresponding to such equations with finite differences are very small. Moreover, a certain direction for the transmission of interaction appearing, it results that the possibility of reverse radiation (a reverse wave generated by points of space where a direct wave has arrived) should be rejected in a logical manner (a memory of previous phenomena determining the direction of propagation). Mathematically, an analysis at a small spatial and temporal scale based on continuous functions for transmitting interactions from one point of space to another (similar to continuous wave function in quantum physics describing the generation and annihilation of elementary particles) should be described by non-Markov processes (phenomena which should be analyzed by taking into account the evolution in a past time interval).

4

Applications at Relativistic Transformation of Waves

An application of such non-Markov processes should be the analysis of Lorentz transformation in special relativity, when a certain wave-train interacts with the observer’s material medium. The usual interpretation of special relativity theory considers that the Lorentz formulae describe the transformation of the space-time coordinates corresponding to an event when the inertial reference system is changed. These formulae are considered to be valid at any moment of time after a certain synchronization moment (the zero moment) irrespective to the measuring method used. However, there are some problems connected to the use of mechanical measurements on closed-loop trajectories with analysis performed on adjoined small time intervals. For example, if we consider that at the zero moment of time, in a medium with a gravitational field which can be neglected, two observers are beginning a movement from the same point of space, in opposite directions, on circular trajectories having a very great radius of curvature, so as to meet again after a certain time interval, we can consider the end of each small time interval as a resynchronization moment and it results that time dilation appears on each small time interval. Yet if we consider that the time intervals measured after a resynchronization procedure can be added to the previously measured time intervals (the result being considered as related to

Vanishing Waves on Semi-closed Space Intervals and Applications

1051

the initial time moment) a global time dilation appears. If the time is measured using the age of two plates, it results that the plate in a reference system S2 is older than the other in reference system S1 , (having a less mechanical resistance) and it can be destroyed by it after both observers stop their circular movements. However, the same analysis can be made by starting from another set of small time intervals considered in the other reference system, and finally results that the plate in reference system S1 is older than the other in reference system S2 , (having a less mechanical resistance) and it can be destroyed by it after both observers stop their circular movements. But this result is in logic contradiction with the previous conclusion, because a plate can not destroy and in the same time be destroyed by another one. A logical attempt for solving this contradiction can be made by considering that Lorentz formulae are valid only for electromagnetic phenomena (as in the case of the transversal Doppler effect) and not in case of mechanical phenomena or any phenomena involving memory of previous measurements. Using an intuitive approach which considers that the Lorentz transformation represents physical transformation of a wave-train when this interacts with the observer’s material medium, such logical contradiction can be avoided (see [5], [6] for more details). Yet the memory of past events can not be totally neglected. The transformation of a received wave into another wave which moves along the same direction, with certain mathematical expressions describing how space-time coordinates corresponding to the case when the received wave would have been not affected by interaction are transformed into space-time coordinates corresponding to the transformed wave train (according to the Lorentz formulae valid on the space and time intervals corresponding to the received wave-train) requires a certain memory for the received wave train necessary for performing the transformation (other way no local time dilation would appear). This aspects is similar to the requirement of using non-Markov processes for justifying how a certain direction of propagation for the generated wave appears.

5

Conclusions

This study has shown that some solutions of the wave equation for semi-closed space interval considered around the point of space where the sources of the generated field are situated (for example, the case of electrical charges generating the electromagnetic field) can be mathematically represented by vanishing waves corresponding to a superposition of travelling test functions. It is also shown that this aspect requires the logical existence of a certain causal chain for transmitting interaction from one point of space to another. As an important consequence, by a multiscale analysis and the use of non-Markov systems, certain directions for propagating the generated wave appeared and the possibility of reverse radiation was rejected. Specific applications for other phenomena involving wave generation (as the Lorentz formulae describing the generation of a wave with different features after the interaction with the observer’s material medium) have been also presented. Unlike other mathematical problems (Cauchy problem) based on

1052

G. Toma

long-range dependence (see also [7], where statistical aspects are also taken into consideration), this study presents aspects connected to short-range interactions. Asymptotic properties are taken into account for the mathematical problem for functions having the limit ∞ (null denominator at the limit of the working semiclosed interval) instead of an approach based on relaxation phenomena, as in [8]. In future studies, such aspects will be extended to mathematical models describing step changes in a certain environment (similar to aspects presented in [9], [10] with step changes presented in [11]. The aspects presented in this study can be extended at closed space intervals, by considering that at the initial moment of time, at one of the spatial limits of the interval arrives a direct wave and a reverse wave (asymmetrical as related to this point of space) represented both by sequences of extended Dirac pulses having the same spacelemgth d = L/k (k being an integer and L being the length of the whole interval). As a consequence, after a certain time interval, a set of oscillations represented by stationary waves with null derivatives of certain orders at both spatial limits will apeear. Acknowledgment. This work was supported by the National Commission of Romania for UNESCO, through a pilot grant of international research involving Politehnica University, Salerno University, IBM India Labs and Shanghai University.

References 1. Toma, C. : Acausal pulses in physics-numerical simulations, Bulgarian Journal of Physics (to appear) 2. Morgado, J. M., Gomes, D.J. : A derivative - free tracking algorithm for implicit curves with singularities, Lecture Notes in Computer Science 3039 (2004) 221–229 3. Federl, P., Prudinkiewiez, P. : Solving differential equations in developmental models of multicellular structures using L-systems, Lecture Notes in Computer Science 3037 (2004) 65–82 4. Toma, C.: The possibility of appearing acausal pulses as solutions of the wave equation, The Hyperion Scientific Journal 4 1 (2004), 25–28 5. Toma, C.: A connection between special relativity and quantum theory based on non-commutative properties and system - wave interaction, Balkan Physics Letters Supplement 5 (1997), 2509–2513 6. Toma, C.: The advantages of presenting special relativity using modern concepts, Balkan Physics Letters Supplement 5 (1997), 2334–2337 7. Li, M., Lim, S.C.: Modelling Network Traffic Using Cauchy Correlation Model with Long-Range Dependence, Modern Physics Letters,B 19 (2005), 829–840 8. Lim, S.C., Li, M. : Generalized Cauchy Process and Its Application to Relaxation Phenomena, Journal of Physics A Mathematical and General 39 (2004), 2935–2951 9. Lide, F., Jinhai, L., Suosheng C.: Application of VBA in HP3470A Data Acquisition System, Journal of Instrunebtation and Measurements 8 (2005), 377–379 10. Lide, F., Wanling, Z., Jinha, L., Amin, J.: A New Intelligent Dynamic Heat Meter, IEEE Proceedings of ISDA 2006 Conference, 187–191 11. Xiaoting, L., Lide, F.: Study on Dynamic Heat Measurement Method after Step Change of System Flux, IEEE Proceedings of ISDA 2006 Conference, 192–197

Modelling Short Range Alternating Transitions by Alternating Practical Test Functions Stefan Pusca Politehnica University, Department of Applied Sciences, Bucharest, Romania

Abstract. As it is known, practical test-functions [1] are very useful for modeling suddenly emerging phenomena. By this study we are trying to use some specific features of these functions for modeling aspects connected with transitions from a certain steady-state to another, with emphasis on he use of short range alternating functions. The use of such short range alternating functions is required by the fact that in modern physics (quantum physics) all transitions imply the use of certain quantum particles (field quantization) described using associated frequencies for their energy. Due to this reason, a connection between a wave interpretation of transitions (based on continuous functions0 and corpuscle interpretation of transitions (involving creation and annihilation of certain quantum particles) should be performed using certain oscillations defined on a limited time interval corresponding to the transition from one steady-state to another. Keywords: transitions, test functions, short range phenomena.

1

Introduction

As it is known, basic concepts in physics connected with interaction are the wave and corpuscle concepts. In classical physics the corpuscle term describes the existence of certain bodies subjected to external forces or fields, and the wave concept describes the propagation of oscillations and fields. In quantum physics, these terms are closely interconnected, the wave train associated to a certain particle describes the probability of a quantum corpuscle (an electron or a photon) to appear; the results of certain measurements performed upon the quantum particle are described by the proper value of the operators corresponding to the physical quantity to be measured, the action of these operators having yo be considered in a more intuitive manner also. Certain problems connected with measurement procedures on closed-loop trajectories in special relativity and non-commutative properties of operators in quantum physics [2] imply a more rigorous definition of measurement method and of the interaction phenomena, classified from the wave and from the corpuscular aspect of matter, so as to avoid contradiction generated by terminological cycles [3]. Logic definition for the class of measuring methods based on the wave aspect of matter and for the class of measuring methods based on the corpuscular aspect of matter upon interaction Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1053–1059, 2007. c Springer-Verlag Berlin Heidelberg 2007 

1054

S. Pusca

phenomena, based on considerations about a possible memory of previous measurements (operators) in case of a sequence of received pulses were presented in [4], trying to obtain expressive pattern classes (similar to those presented in [5]). As a consequence, aspects connected with memory of previous measurements corresponding to action of systems upon received wave-trains have to be raken into consideration. Moreover, this aspect implies an intuitive interpretation for the dependence of the mass of a body inside a reference system. Thus, it was shown that for the case when the Lorentz transformation doesn’t generate a pulse (for example when the relative speed between the material body and the wave is equal to c, the speed of light in vacuum), the mass m is equal to ∞ , which means that no interaction due to the received pulse exists. This manner the notion on infinite mass is connected with the absence of interaction) [8]. So m = ∞ for a body inside a reference system S shows that we can’t act upon the material body using wave pulses emitted in system S; however, changes in the movement of the body (considered in system S ) due to other external forces seem to be allowed. The absence of interaction is connected also with absence of estimation for space coordinates of the wave source (the general case being presented in [9]). This aspect can be considered as a suddenly emerging phenomenon, while the interaction disappears when the relative speed v between the system which emits the wave and the system which receives it becomes equal to c. Yet the problem is more complex if a high energy pulse interacts with a single or with a small number of elementary (small) particles. In this case the total energy of the particles (according to relativistic expression E = mc2 can be much smaller than the energy of the received pulse which interacts with them. For a correct analysis (according to previous considerations) the small (elementary) particles should be considered as associated wave trains interacting with a high-energy environment (some scaling aspects [10] appearing). The high energy pulses would be much less affected by interaction, which means that it is the element performing the transformation; associated wave-train of the particles would be the much more affected by interaction, being the element which undergoes the transformation. In the most general case, the study of wave transformations according to Lorentz formulae in a certain environment must be performed in the reference systems where the total momentum is zero (by analogy with the study of collisions in the reference system associated to the center of mass). For an improved analysis of phenomena we must find an approach able to connect transitions corresponding from one steady state of a system to another using a formalism based on functions defined on limited time intervals. Smooth transitions (based on practical test-function, similar to wavelets) where presented in [2], but that study has presented an algorithm for generating smooth transitions for any derivative of the function f describing the transitions, avoiding alternating function. On the contrary, for allowing an approach able to explain also the creation and annihilation of quantum particles by interactions in modern physics (defined on an extremely small time interval dt around the interaction moment

Modelling Short Range Alternating Transitions

1055

of time), the use of such alternating functions is recommended, so as to appear certain frequencies usually associated to energy of quantum particles in modern physics. So the algorithm presented in [9] should be improved.

2

Connections with Test Functions

For modeling phenomena connected with wave-train transformation in a certain environment, we could use the formalism of topological solitary waves with arbitrary charge [3] or of harmonic wavelets [4]. However, the disappearance of interaction when the relative speed v equals c implies the absence of certain state variables at the very moment of time when v = c, when v passes from a value less than c to a value greater than c; this is similar to aspects connected with the integration of functions similar to test functions on a working interval [5] - a certain number of derivatives vanishing at the ends of this interval. Specific features from modeling solitary waves in composite materials [6] could be useful, avoiding the mathematical possibility of generating acausal pulses [7] (the Lorentz transformation of a received wave-train does not generate any wave unless a certain received wave exists, and it acts instantly). Stochastic aspects of the Schroedinger equation imply a probability of measuring a certain value for a physical quantity connected with an associated wave, not a probability of different associated waves appearing (see [8] for a wavelet analysis of the Schroedinger equation). From basic mathematics it is known that the product ϕ(t)g(t) between a function g(t) which belongs to the C∞ class and a test-function ϕ(t) which is nonzero on (a, b) is also a test-function, because: a) it is nonzero only on the time interval (a, b) where ϕ(t) is nonzero (if ϕ(t) is null, then the product ϕ(t)g(t) is also null); b) the function ϕ(t)g(t) belongs to the C∞ class of functions, since a derivative of a certain order k can be written as

(ϕ(t)g(t))^{(k)} = \sum_{p=0}^{k} C_k^p ϕ^{(p)}(t) g^{(k−p)}(t)    (1)

(a sum of terms represented by products of two continuous functions). Yet for practical cases (when phenomena must be represented by differential equations), the test functions ϕ(t) must be replaced by practical test functions f(t) ∈ C^n on R (for a finite n, considered from now on as the order of the practical test function) having the following properties: a) f is nonzero on (a, b); b) f satisfies the boundary conditions f^{(k)}(a) = f^{(k)}(b) = 0 for k = 0, 1, ..., n; and c) f restricted to (a, b) is the solution of an initial value problem (i.e. an ordinary differential equation on (a, b) with initial conditions given at some point in this interval). The generation of such practical test functions is based on the study of differential equations satisfied by the initial test functions, with the initial moment


of time chosen at a moment close to t = a (when the function begins to present non-zero values). By using these properties of practical test-functions, we obtain the following important result for a product f(t)g(t) between a function g(t) which belongs to the C∞ class and a practical test-function f(t) of order n which is nonzero on (a, b): General Property of the Product: The product g(t)f(t) between a function g(t) ∈ C∞ and a practical test function f of order n is itself a practical test function of order n. This is a consequence of the following two properties: a) the product g(t)f(t) is nonzero only on the time interval (a, b) on which f(t) is nonzero; b) the derivative of order k of the product g(t)f(t) is represented by the sum

(f(t)g(t))^{(k)} = \sum_{p=0}^{k} C_k^p f^{(p)}(t) g^{(k−p)}(t)    (2)

which is a sum of terms representing products of two continuous functions for any k ≤ n (n being the order of the practical test-function f); only for k > n can discontinuous functions appear in the previous sum. Integral properties of practical test functions of a certain order have been presented in [9]. There it was shown that the integral ϕ(t) of a test function φ(t) (which is nonzero on the (a, b) interval) is a constant function on the time intervals (−∞, a] and [b, +∞); it presents a certain variation on the (a, b) time interval, from a constant null value to a certain quantity Δ corresponding to the final constant value. This aspect was used for modeling smooth transitions from one state to another when almost all derivatives of a certain function are equal to zero at the initial moment of time. The absence of interaction at the time moment t_in (when v = c), considered as the initial moment of time, suggested that all (or a great number) of the derivatives of the functions x = x(t), y = y(t), z = z(t) (the space coordinates) are null for t = t_in and present a certain variation on a time interval close to t_in (when v is no longer equal to c). In the general case when a function f and a finite number of its derivatives f^{(1)}, f^{(2)}, ..., f^{(n)} present variations from null values to values Δ, Δ_1, Δ_2, ..., Δ_n on the time interval [−1, 1], a certain function f_n which should be added to the null initial function so as to obtain a variation Δ_n for the derivative of order n was studied. By multiplying the exponential bump-like function (a test-function on [−1, 1]) by the variation Δ_n of the derivative of order n and by integrating this product n + 1 times we obtain: - after the first integration: a constant value equal to Δ_n at the time moment t = 1 (since the integral of the bump-like test function on [−1, 1] is equal to 1) and a null variation on (1, +∞);


- after the second integration (when we integrate the function obtained at the previous step): a term equal to Δ_n(t − 1) and a term equal to a constant value c_{n1} (a constant of integration) on the time interval (1, +∞);
- after the (n + 1)-th integration: a term equal to Δ_n(t − 1)^n/n! and a sum of terms of the form c_{ni}(t − 1)^i/i! for i ∈ N, i < n (c_{ni} being constants of integration) on the time interval (1, +∞), and so on. Corrections are needed because the function f_n obtained in this way has non-zero variations d_{n−1}, d_{n−2}, ..., d_1 for its derivatives of order n − 1, n − 2, ..., 1; these values were subtracted from the set Δ_{n−1}, Δ_{n−2}, ..., Δ_1 before passing to the next step, when the bump-like function was multiplied by the corrected value Δ_{n−1} − d_{n−1}. Finally, by integrating this product n times we obtained in a similar manner a function with a term equal to Δ_{n−1}(t − 1)^{n−1}/(n − 1)! and a sum of terms of the form c_{ni}(t − 1)^i/i! for i ∈ N, i < n − 1 (c_{ni} being constants of integration) on the time interval (1, +∞); it can be noticed that the result obtained after n integrations possesses a derivative of order n − 1 equal to Δ_{n−1}, so a smooth transition of this derivative from the initial null value is performed. So the second function which must be added to the initial null function is the integral of order n − 1 of the bump-like function multiplied by this variation Δ_{n−1} (denoted by f_{n−1}). The function f_{n−1} has a null variation for the derivative of order n, so the result obtained at the first step is not affected. We must again take into account the fact that the function f_{n−1} obtained in this way has non-zero variations d^1_{n−1}, d^1_{n−2}, ..., d^1_1 for its derivatives of order n − 1, n − 2, ..., 1, and so we must once again subtract these values from the previously corrected set Δ_{n−1} − d_{n−1}, Δ_{n−2} − d_{n−2}, ..., Δ_1 − d_1 before passing to the next step. Finally we obtain all the functions f_{n+1}, f_n, ..., f_1 which represent the terms of the function f modeling the smooth transition from an initial null function to a function having a certain set of variations for a finite number of its derivatives on a small time interval. The procedure can also be applied to functions possessing a finite number of derivatives within a certain time interval by time reversal (t being replaced with −t). The next step consists in considering the previously obtained functions as the argument of a complex function F. In [10] the similitude between the coefficients appearing in a partial fraction decomposition and the electric field intensity E depending on the distance a − b (in electrostatics) has been presented. If we write the decomposition

1/((x − a)(x − b)) = (1/(a − b)) (1/(x − a)) − (1/(a − b)) (1/(x − b))    (3)

and compare the coefficient 1/(a − b) of each term with the electric field

E = Q/(4π(a − b))    (4)

for the classical case in electrostatics when a point situated at x_d = b receives an electric field emitted by a body with charge Q situated at the point x_s = a (the one-dimensional case), then - without taking the sign into consideration - we notice that the coefficient 1/(a − b) is also the coefficient of Q/(4π). This suggested that such coefficients of 1/(x − a) correspond to certain physical quantities noticed at the point x = b and associated with a field emitted at the point x = a. It also suggested that the whole system S_{a,b} should be described as

S_{a,b} = (Q/(4π)) · 1/((x − a)(x − b))    (5)

and that it can be decomposed into phenomena taking place at the point x = a or x = b by taking into consideration the coefficient of 1/(x − a) or 1/(x − b) from the partial fraction decomposition. Mathematically, these coefficients c_a, c_b can be written as

c_a = lim_{x→a} (x − a) S_{a,b},   c_b = lim_{x→b} (x − b) S_{a,b}    (6)

By simply replacing the coefficients a, b appearing in the denominators with M_1 exp(iF_1), M_2 exp(iF_2) (complex functions with arguments F_1, F_2 determined using the previous algorithm) we obtain a smooth transition for the denominators of the partial fractions involved in the interaction, certain frequencies appearing. This is an important step for improving the aspects presented in [10], where the necessity of using time-dependent functions for the coefficients appearing in the denominators of the partial fractions corresponding to physical quantities has already been mentioned.
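The construction described above can be sketched numerically. The following Python fragment is an illustration only, not the author's implementation: the grid resolution, the normalization of the bump function and the purely numerical correction of the lower-order variations are assumptions. It builds a function on [−1, 1] whose derivatives up to a chosen order reach prescribed variations by repeatedly integrating a bump-like test function.

import numpy as np

def bump(t):
    # exponential bump test-function on (-1, 1), normalized to unit integral
    b = np.zeros_like(t)
    inside = np.abs(t) < 1
    b[inside] = np.exp(1.0 / (t[inside]**2 - 1.0))
    return b / np.trapz(b, t)

def smooth_transition(deltas, t):
    # deltas[k] = desired variation of the k-th derivative of f across [-1, 1]
    dt = t[1] - t[0]
    f = np.zeros_like(t)
    targets = list(deltas)
    for order in range(len(targets) - 1, -1, -1):   # highest order first
        g = targets[order] * bump(t)
        for _ in range(order + 1):                  # integrate order+1 times
            g = np.cumsum(g) * dt
        f += g
        # subtract the variations this term adds to the lower-order derivatives
        for k in range(order):
            d = g.copy()
            for _ in range(k):
                d = np.gradient(d, dt)
            targets[k] -= d[-1] - d[0]
    return f

t = np.linspace(-1.0, 1.0, 4001)
f = smooth_transition([1.0, 0.5, 0.2], t)           # jumps for f, f', f''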

3 Conclusions

This study has shown that the use of such short range alternating functions is required by the fact that in modern physics (quantum physics) all transitions

Modelling Short Range Alternating Transitions

1059

imply the use of certain quantum particles (field quantization) described using associated frequencies for their energy. For this reason, a connection between a wave interpretation of transitions (based on continuous functions) and a corpuscular interpretation of transitions (involving the creation and annihilation of certain quantum particles) has been established using certain oscillations defined on a limited time interval corresponding to the transition from one steady state to another. Acknowledgment. This work was supported by the National Commission of Romania for UNESCO, through a pilot grant of international research involving Politehnica University, Salerno University, IBM India Labs and Shanghai University.

References
1. Toma, G.: Practical test-functions generated by computer algorithms, Lecture Notes in Computer Science 3482 (2005), 576–585
2. Toma, C.: The advantages of presenting special relativity using modern concepts, Balkan Physics Letters Supplement 5 (1997), 2334–2337
3. D'Avenia, P., Fortunato, D., Pisani, L.: Topological solitary waves with arbitrary charge and the electromagnetic field, Differential Integral Equations 16 (2003), 587–604
4. Cattani, C.: Harmonic Wavelets towards Solution of Nonlinear PDE, Computers and Mathematics with Applications 50 (2005), 1191–1210
5. Toma, C.: An extension of the notion of observability at filtering and sampling devices, Proceedings of the International Symposium on Signals, Circuits and Systems Iasi SCS 2001, Romania, 233–236
6. Rushchitsky, J.J., Cattani, C., Terletskaya, E.V.: Wavelet Analysis of the evolution of a solitary wave in a composite material, International Applied Mechanics 40, 3 (2004), 311–318
7. Toma, C.: The possibility of appearing acausal pulses as solutions of the wave equation, The Hyperion Scientific Journal 4, 1 (2004), 25–28
8. Cattani, C.: Harmonic Wavelet Solutions of the Schroedinger Equation, International Journal of Fluid Mechanics Research 5 (2003), 1–10
9. Toma, A., Pusca, St., Morarescu, C.: Spatial Aspects of Interaction between High-Energy Pulses and Waves Considered as Suddenly Emerging Phenomena, Lecture Notes in Computer Science 3980 (2006), 839–847
10. Toma, Th., Morarescu, C., Pusca, St.: Simulating Superradiant Laser Pulses Using Partial Fraction Decomposition and Derivative Procedure, Lecture Notes in Computer Science 3980 (2006), 771–779

Different Structural Patterns Created by Short Range Variations of Internal Parameters

Flavia Doboga

ITT Industries, Washington, U.S.A.

Abstract. This paper presents properties of spatial linear systems described by a certain physical quantity generated by a differential equation. This quantity can be represented by the internal electric or magnetic field inside the material, by a concentration, or by similar physical or chemical quantities. A specific differential equation generates this quantity considering as input the spatial alternating variations of an internal parameter. As a consequence, specific spatial linear variations of the observable output physical quantity appear. It is shown that in the case of very short range variations of this internal parameter, systems described by a differential equation able to generate a practical test-function exhibit an output which appears to an external observer in the form of two distinct envelopes. These can be considered as two distinct structural patterns located in the same material along a certain linear axis. Keywords: patterns, short range variations, internal parameters.

1 Introduction

This paper presents properties of spatial linear systems described by a certain physical quantity generated by a differential equation. This quantity can be represented by the internal electric or magnetic field inside the material, by a concentration, or by similar physical or chemical quantities. A specific differential equation generates this quantity considering as input the spatial alternating variations of an internal parameter. As a consequence, specific spatial linear variations of the observable output physical quantity appear. It is shown that in the case of very short range variations of this internal parameter, systems described by a differential equation able to generate a practical test-function exhibit an output which appears to an external observer in the form of two distinct envelopes. These can be considered as two distinct structural patterns located in the same material along a certain linear axis. In the ideal mathematical case, suddenly emerging pulses should be simulated using test-functions (functions which are nonzero only on a limited interval and possess an infinite number of continuous derivatives on the whole real axis). However, as shown in [1], such test functions, similar to the Dirac functions, can't be generated by a differential equation. The existence of such an equation of evolution, beginning to act at an initial moment of time, would imply the necessity for a derivative of a certain order to make a jump at this initial moment


of time from the zero value to a nonzero value. But this aspect is in contradiction with the property of test-functions to have continuous derivatives of any order on the whole real axis, represented in this case by the time axis. So it follows that an ideal test-function can't be generated by a differential equation. For this reason, the analysis must be restricted to practical test-functions [2], defined as functions which are nonzero on a certain interval and possess only a finite number of continuous derivatives on the whole real axis. Mathematical methods based on difference equations are well known [3], but for a higher accuracy of the computer simulation specific Runge-Kutta methods in Matlab are recommended. The physical aspects of dynamical systems able to generate spatial practical test-functions will be studied for the case when the free term of the differential equation (corresponding to the internal parameter of the material) is represented by alternating functions. The shape of the output signal (obtained by numerical simulations in Matlab based on Runge-Kutta functions) will be analyzed, and it will be shown that for very short range alternating inputs an external observer could notice (in certain conditions) the existence of two distinct envelopes corresponding to two distinct structural patterns inside the material. This aspect differs from the oscillations of unstable-type second order systems studied using difference equations [4], and it also differs from previous studies of the same author [5] where the frequency response of such systems to alternating inputs was studied (in conjunction with the ergodic hypothesis).

2 Equations Able to Generate Periodical Patterns

As is known, a test-function on a spatial interval [a, b] is a function which is nonzero on this interval and which possesses an infinite number of continuous derivatives on the whole real axis. For example, the function

ϕ(x) = exp(1/(x² − 1)) if x ∈ (−1, 1),  ϕ(x) = 0 otherwise

is a test-function on [−1, 1]. For a small value of the numerator of the exponent, a rectangular shape of the output is obtained. An example is the function

ϕ(x) = exp(0.1/(x² − 1)) if x ∈ (−1, 1),  ϕ(x) = 0 otherwise.

Using the expression of ϕ(x) and of its derivatives of first and second order, a differential equation which admits as solution the function ϕ corresponding to a certain physical quantity can be obtained. However, a test-function can't be the solution of a differential equation. Such an equation of evolution implies a jump at the initial space point for a derivative of a certain order, while a test-function must possess continuous derivatives of any order on the whole real axis. So it follows that a differential equation which admits a test-function ϕ as solution can generate only a practical test-function f similar to ϕ, but having a finite number of continuous derivatives on the real Ox axis. In order to do this, we must add


initial conditions for the function f (generated by the differential equation) and for some of its derivatives f^{(1)} and/or f^{(2)} etc., equal to the values of the test-function ϕ and of some of its derivatives ϕ^{(1)} and/or ϕ^{(2)} etc. at an initial space point x_in very close to the beginning of the working spatial interval. This can be written in the form

f(x_in) = ϕ(x_in),  f^{(1)}(x_in) = ϕ^{(1)}(x_in)  and/or  f^{(2)}(x_in) = ϕ^{(2)}(x_in)  etc.    (1)
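As a small illustration of how the initial conditions (1) can be supplied in practice, the following Python lines evaluate the bump test-function and its first derivative; this is a sketch only, and the starting point x_in = −0.99 is an assumption (the text only requires it to be very close to the beginning of the interval).

import numpy as np

def phi(x):
    # bump test-function exp(1/(x^2 - 1)) on (-1, 1)
    return np.exp(1.0 / (x**2 - 1.0)) if abs(x) < 1 else 0.0

def dphi(x):
    # analytic first derivative of phi
    return phi(x) * (-2.0 * x) / (x**2 - 1.0)**2 if abs(x) < 1 else 0.0

x_in = -0.99
f0, f1 = phi(x_in), dphi(x_in)   # initial values f(x_in), f'(x_in) for Eq. (1)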

If we want to generate spatial practical test-functions f which are symmetrical with respect to the middle of the working spatial interval, we can choose the middle of this interval as the space origin of the Ox axis, so the function f should be invariant under the transformation x → −x. Functions invariant under this transformation can be written in the form f(x²) (similar to aspects presented in [2]), and so the general second order differential equation generating such functions must have the form

a_2(x²) d²f/d(x²)² + a_1(x²) df/d(x²) + a_0(x²) f = 0    (2)

However, for studying the generation of structural patterns on such a working interval, we must add a free term corresponding to the internal parameter of the material (the cause of the variations of the externally observable physical quantity). Thus, a model for generating a practical test-function using as input the internal parameter u = u(x), x ∈ [−1, 1], is

a_2(x²) d²f/d(x²)² + a_1(x²) df/d(x²) + a_0(x²) f = u    (3)

subject to

lim_{x→±1} f^{(k)}(x) = 0  for k = 0, 1, . . . , n,    (4)

which are the boundary conditions of a practical test-function. For u represented by alternating functions, we should notice periodical variations of the externally observable physical quantity f.

3 Periodical Patterns of Spatial Structures Described by Practical Test-Functions

According to the previous considerations on the form of a differential equation invariant under the transformation x → −x, a first order system can be written in the form

df/d(x²) = f + u    (5)


which converts to

df/dx = 2xf + 2xu    (6)

representing a first order dynamical system. For a periodical input (corresponding to the internal parameter) u = sin 10x, numerical simulations performed using Runge-Kutta functions in Matlab present an output of irregular shape (figure 1), not suitable for joining together the outputs of a set of adjoining linear intervals (the value of f at the end of the interval differs significantly from the value of f at the beginning of the interval). A better form of the physical quantity f is obtained for variations of the internal parameter described by the equation u = cos 10x. In this case the output is symmetrical with respect to the middle of the interval (as can be noticed in figure 2) and the results obtained on each interval can be joined together on the whole linear spatial axis without any discontinuities. The resulting output would be represented by alternations of two great oscillations (one at the end of an interval and another one at the beginning of the next interval) and two small oscillations (around the middle of the next interval).
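The first order system (6) can be reproduced with any standard solver. The sketch below uses Python/SciPy rather than the Matlab Runge-Kutta functions used by the author, and the initial condition f(−1) = 0 is an assumption, since the paper does not state it explicitly; it generates curves of the type shown in Figs. 1-2.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, f, u):
    # Eq. (6): df/dx = 2x f + 2x u(x)
    return 2.0 * x * f + 2.0 * x * u(x)

for u in (lambda x: np.sin(10 * x), lambda x: np.cos(10 * x)):
    sol = solve_ivp(rhs, (-1.0, 1.0), [0.0], args=(u,),
                    method="RK45", max_step=1e-3, dense_output=True)
    x = np.linspace(-1.0, 1.0, 2001)
    f = sol.sol(x)[0]          # observable quantity f(x) on the working interval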

Fig. 1. f versus distance for first order system, input u = sin(10x)

Similar results are obtained for an undamped first order dynamical system, represented by

df/d(x²) = u    (7)

which is equivalent to

df/dx = 2xu    (8)


Fig. 2. f versus distance for first order system, input u = cos(10x)

Fig. 3. f versus distance for first order system, input u = sin(100x)

4 Connection with the Ergodic Hypothesis

When the internal parameter presents very short range variations, some new structural patterns can be noticed. Considering an alternating input of the form u = sin(100x), the resulting observable physical quantity f is represented in figure 3; for an alternating cosine input represented by u = cos(100x), the resulting output f is represented in figure 4. Studying these two graphs, we can notice the presence of two distinct envelopes. Their shape depends on the phase of the


Fig. 4. f versus distance for first order system, input u = cos(100x)

input alternating component (the internal parameter), as related to the space origin. At first sight, an external observer could notice two distinct functions f inside the same material, along the Ox axis. These can be considered as two distinct structural patterns located in the same material, generated by a short range alternating internal parameter u through a certain differential equation (invariant at the transformation x → −x).
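The two patterns can also be read off numerically. A simple way - an illustration, not the author's procedure - is to solve the same first order system for the short range input and separate the local maxima from the local minima of the output, which trace the two envelopes.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import argrelextrema

u = lambda x: np.sin(100 * x)
sol = solve_ivp(lambda x, f: 2 * x * f + 2 * x * u(x), (-1.0, 1.0), [0.0],
                max_step=1e-4, dense_output=True)
x = np.linspace(-1.0, 1.0, 20001)
f = sol.sol(x)[0]
upper = argrelextrema(f, np.greater)[0]   # indices of local maxima -> first pattern
lower = argrelextrema(f, np.less)[0]      # indices of local minima -> second pattern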

5 Conclusions

This paper has presented properties of spatial linear systems described by a certain physical quantity generated by a differential equation. A specific differential equation generates this quantity considering as input the spatial alternating variations of an internal parameter. As a consequence, specific spatial linear variations of the observable output physical quantity appear. It was shown that in the case of very short range variations of this internal parameter, systems described by a differential equation able to generate a practical test-function exhibit an output which appears to an external observer in the form of two distinct envelopes. These can be considered as two distinct structural patterns located in the same material along a certain linear axis. Through this study, a fundamentally new interpretation based on spatial aspects has been obtained for graphs previously obtained for non-linear equations of evolution [5] (which justifies the novelty of this study). Acknowledgment. This research work was guided by Cristian Toma (Politehnica University, Bucharest) and Carlo Cattani (University of Salerno, Italy) through a pilot grant of international research involving Politehnica University,


Salerno University, IBM India Labs and Shanghai University - supported by the National Commission of Romania for UNESCO.

References
1. Toma, C.: An extension of the notion of observability at filtering and sampling devices, Proceedings of the International Symposium on Signals, Circuits and Systems Iasi SCS 2001, Romania, 233–236
2. Toma, G.: Practical test functions generated by computer algorithms, Lecture Notes in Computer Science 3482 (2005), 576–584
3. Dzurina, J.: Oscillation of second order differential equations with advanced argument, Math. Slovaca 45, 3 (1995), 263–268
4. Zhang, Zh., Ping, B., Dong, W.: Oscillation of unstable type second order nonlinear difference equation, Korean J. Computer and Appl. Math. 9, 1 (2002), 87–99
5. Doboga, F., Toma, G., Pusca, St., Ghelmez, M., Morarescu, C.: Filtering Properties of Practical Test Functions and the Ergodic Hypothesis, Lecture Notes in Computer Science 3482 (2005), 563–568

Dynamic Error of Heat Measurement in Transient Fang Lide, Li Jinhai, Cao Suosheng, Zhu Yan, and Kong Xiangjie The Institute of Quality and Technology Supervising, Hebei University, Baoding, China, 071051 [email protected]

Abstract. According to EN1434 (European Standard) and OIML R-75(International Organization of Legal Metrology), there are two methods in heat measurement, and they are all steady-state methods of test, using them in transient, obvious error will be produced, so some accurately measuring functions should be seeking. In a previous published paper of the author, a transient function relationship of heat quantity and time is deduced, and the validity of this function is proved through experimentation, also it is simplified reasonably, so it can be used in variable flow rate heating system. In this study, a comparison with steady-state method and dynamic method is presented, the errors exist in the steady state method are analyzed. A conclusion is improved that the steady-state methods used in variable flow heating system will result in appreciable errors, the errors only exist in transient, when system reach steady state, the errors disappear, moreover, the transient time is long in heating system, it is at least 30 minutes, so it is necessary to take some measures to correct them, however, study showed that the error can be ignored when the flow rate step change is less than 5kg/h. Keywords: Dynamic heat meter, variable flow heating system, step change, flux.

1 Introduction

The accuracy of the measurement model is a continuous goal in measurement instrument design; moreover, in the last decades saving energy has become an important issue, due to environmental protection and economical reasons. Every year a significant part of the energy is utilized for heating, so the primary concern of heating services today is to accurately measure and charge for the heat consumption. According to EN1434 (European Standard) and OIML R-75 (International Organization of Legal Metrology), there are two methods of heat measurement, and both are steady-state methods of test. Theory gives the two methods as

Q = ∫_{τ1}^{τ2} G Δh dτ    (1)

Q = ∫_{V1}^{V2} k Δθ dV    (2)


where: Q is the quantity of heat given up; G is the mass flow rate of the heat-conveying liquid passing through the heat meter; Δh is the difference between the specific enthalpies of the heat-conveying liquid at the flow and return temperatures of the heat-exchange circuit; t is time; V is the volume of liquid passed; k, called the heat coefficient, is a function of the properties of the heat-conveying liquid at the relevant temperatures and pressure; Δθ is the temperature difference between the flow and return of the heat exchange circuit [1,2]. Usually, equation (1) is called the enthalpy difference method and equation (2) the K coefficient method; they are widely used in all kinds of heat meters [3-8]. However, when the flow rate of the system has a step change, the heat dissipating capacity is a function of flow rate, temperature and time, and, if the transient time interval between two steady states is long enough, a huge error would be produced by using the steady equation (1) or (2). During the author's study of a new heat measurement and control system, a functional relationship between heat quantity and time was deduced, the validity of this function was proved through experimentation, and it was simplified reasonably [9], so it can be used in variable flow heating systems; this article presents a comparison of the steady-state heat measurement method and the dynamic heat measurement method.
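For reference, the steady-state enthalpy-difference method of Eq. (1) reduces to a simple numerical integration of sampled data. The sketch below uses the simplification Δh ≈ c_p Δθ with a constant specific heat, which is an assumption made for illustration only (real heat meters use enthalpy tables).

import numpy as np

CP_WATER = 4186.0            # J/(kg*K), assumed constant specific heat

def heat_quantity(t, G, T_flow, T_return, cp=CP_WATER):
    # Q = integral of G * dh over time, via trapezoidal integration of samples.
    # t in s, G in kg/s, temperatures in the same unit (difference matters); Q in J.
    dh = cp * (np.asarray(T_flow) - np.asarray(T_return))
    return np.trapz(np.asarray(G) * dh, np.asarray(t))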

2 Principle of the Dynamic Heat Meter

When the system flux undergoes a step change, the author puts forward a "level-move" presumption for the temperature distribution curve in the radiator during the transient. On the basis of this presumption, the function of heat quantity versus time is deduced, the validity of the function is proved through experimentation, and the relative error of the function was within 3%. The dynamic heat measurement function in the transient after the flux changes to a larger value is given by Equation (3), and the opposite case can be expressed as equation (4) [9,10,11,12].

Q0−1 = cG0 (t g − t n )(1 − e −b0 FC ) − ( t g − t n ).( K 0 + ( K 1 − K 0 )

τ ) τ s1

b0

(t g − t n ).( K 0 + ( K 1 − K 0 ) b1

τ ) τ s1

(e

− b 0 ( FC −

( e − b1 FC − e

τ . a 0 −1 ) τ s1

− b1

τ FC τ s1

−e

− b0 (

τ τ FC − . a 0 −1 ) τ s1 τ s1

).......( 0 ≤ τ ≤ τ s1 )

Q1−0 = cG0 (t g − t n )(1 − e −b0 FC ) + ( t g − t n ).( K 1 − ( K 1 − K 0 ) b0

τ ) τ s1

)+

( e − b 0 FC − e

− b0

τ FC τ s1

)

(3)


( t g − t n ).( K 1 − ( K 1 − K 0 ) −

b1

τ ) τ s1

(e

− b1 ( FC +

τ . a1 − 0 ) τ s1

−e

− b1 (

τ τ FC + . a1 − 0 ) τ s1 τ s1

)

(4)

.......(0 ≤ τ ≤ τ s 0 ) Where


a 0−1 , the max right parallel moving distance; a1−0 , the max left parallel



b0 , b1 coefficient ( m −1 ); c —mass specific heat of water. FC —total area of radiator m2 . G0 — flux before step change. G1 — flux after step change. K —heat transfer coefficient of radiator(W/m2. ). Q0−1 —dynamic heat quantity of radiator in flux largen transient W . Q1− 0 —dynamic heat quantity of radiator in flux diminution transient W . t n —average room temperature.

moving distance;

. tg

( )



( ) ( ) (℃) —feed water temperature of radiator(℃). τ —evacuation time of the medium in s1

the flow rate G1 . τ s 0 —evacuation time of the medium in the flow rate G0 .

These two equations can be applicable not only for dynamic course, but also for steady course. Let τ 0 in steady course.



3 Dynamic Heat Measurement Error of the Heat Meter in the Transient

Comparing equations (3),(4) with equation (1) (or (2)) under different conditions, the heat measurement error in the transient can be acquired. The following analysis is based on the assumption that the heat dissipating capacity of the pipeline is too small to be considered and that the water supply temperature t_g is invariable.

3.1 Errors for the Same Initial Flow Rate with Different Step Changes

The results are presented in Fig. 1, 2. The curves show that Eq. (1) used in the transient can produce very large errors. When the system flow rate varies from G0 to the larger G1, the maximum error occurs at the beginning of the transient: at this moment the feed water temperature starts rising while the backwater temperature remains unchanged, so the temperature difference rises; at the same time the system flow rate becomes large. All these factors cause the heat dissipating capacity calculated with Eq. (1) to be larger than that of Eq. (3). As time goes on, the error becomes small as the backwater temperature rises. At τ = τ_{S1}, the mass of the system finishes updating, the backwater temperature reaches its real level, and the error disappears. When the system flow rate varies to a smaller value, from

G1 to G0 ,

maximum error exist in the beginning of the transient too, and it is negative value, for this moment the feed water temperature is invariable, and the backwater temperature



Fig. 1. Comparison of average of relative error in step change 5 and 10

Fig. 2. Average of relative error in the same initial flow rate with different step change

also remain unchanged at the beginning, so the temperature difference is invariable, on the same time, system flow rate vary small, all this factors caused calculating value of heat dissipating capacity with Eq. (1) less than Eq.(4). With the time gone, the errors become small with the backwater temperature descending. At τ = τ S 0 , the mass of system finish updating, backwater temperature reach its real level, the error disappeared. Moreover, different step change has different average of relative error, the larger step change, and the more average of relative error. Fig.1 only showed two step changes (5 and 10), more test data and comparison presented in Fig.2.


3.2 Errors in Same Step Change with Different Initial Flow Rate


G0

The results are shown in Fig. 3 to Fig. 6.

Fig. 3. Average of relative error in different initial flow rate and different step change (flow rate vary to large)

Fig. 4. Average of relative error in different initial flow rate and different step change (flow rate vary to small)

It is clear from all the results that average of relative errors increase with increasing step change. At the same step change, absolute average of relative errors is descending with the initial flow rate G 0 changing to large no matter in the transient of system flow



rate changing to large or small. Fig.3,4 showed average of relative errors have the same trend in different step change; Average of relative error tends to unity at same step change ratio, so all data almost exhibits one line in Fig.5. In flow rate large to small transient, all data exhibits most curvature, particularly at low initial flow rate, see Fig.6. These pictures would provide some valuable reference for heat measurement function correlations. Two ways may be concerned in correcting the errors, one is to use dynamic equations in transient, the other is to use steady equation multiply a coefficient which is the function of step change ratio.

Fig. 5. Average of relative error in different step change ratio and different initial flow rate (flow rate vary to large)

Fig. 6. Average of relative error in different step change ratio and different initial flow rate (flow rate vary to small)



4 Conclusion

From the discussion above, an obvious error will be produced if the heat dissipating capacity of the heating system is calculated using the steady-state equation (1) after a step change of the flow rate. For a fixed original flow rate, when the flow rate step change is small the corresponding relative error is small too; otherwise it is large. For a fixed step change, the average relative errors decrease as the initial flow rate increases. These errors exist only in the transient, and a part of them can be counteracted because the sign of the relative error is reversed between the two processes of flow rate increase and decrease. According to the calculation data shown in the paper, the error can be ignored when the flow rate step change is less than 5 kg/h; otherwise it must be corrected. The errors mentioned above exist only in the transient and disappear when the system reaches the steady state; however, the transient time in a heating system is long - at least 30 minutes - so it is necessary to take measures to correct them. To correct the errors, some routines should be added to the system software, so that the heat meter has functions such as step-change discrimination and timing; in brief, the heat meter should be able to judge the magnitude of the flow rate step change, so that different formulae are used in different working states. Two ways may be considered for correcting the errors: one is to use the dynamic equations in the transient, the other is to multiply the steady equation by a coefficient which is a function of the step change.

References [1] Europeanstandard, EN1434:1997has the status of a DIN standard, Heatmeters,(1997) [2] OIML-R75 International Recommendation, Heatmeters,(2002) [3] SHOJI KUSUI and TETSUO NAGAI: An Electronic Integrating Heat Meter, IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. VOL 39 NO S. OCTOBER,pp785-789,1990 [4] Géza Móczár,Tibor Csubák and Péter Várady: Distributed Intelligent Hierarchical System for Heat Metering and Controlling, IEEE Instrumentation and Measurement Technology Conference,Budapest,Hungary,pp2123-2128,May 21-23,2001. [5] Géza Móczár, Tibor Csubák, and Péter Várady: Student Member, IEEE, Distributed Measurement System for Heat Metering and Control, IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 51, NO. 4, pp691-694, AUGUST 2002. [6] Jin Hai-long; Pan Yong: Development and research of a new style intelligent heat meter, Chinese Journal of Sensors and Actuators vol.18, no.2 : 350-352, 2005 [7] Hao LN, Chen H, Pei HD: Development on a new household heat meter, ISTM/2005: 6TH INTERNATIONAL SYMPOSIUM ON TEST AND MEASUREMENT, VOLS 1-9, CONFERENCE PROCEEDINGS : 3090-3092, 2005 [8] Ye Xian-ming; Zhang Xiao-dong: Design on intelligent heat meter, Instrument Techniques and Sensor no.1: 10-12, 2005 [9] Fang Lide: Study on dynamic character of a new heating measurement and control system, Master’s degree dissertation of Hebei University of Technology, pp18-36, 2005.3



[10] Fang Lide, Li Xiaoting, Li Jinhai, etc: Analyze Dynamic Errors of heat meter after step change of Flow rate, Electrical Measurement & Instrumentation (J), pp20-24, .2005.9 [11] Toma, G: Practical Test Functions Generated by Computer Algorithms, Lecture Notes Computer Science 3482 (2005), 576—585 [12] Toma, C: An Extension of the Notion of Observability at Filtering and Sampling Devices, Proceedings of the International Symposium on Signals, Circuits and Systems Iasi SCS 2001, Romania, 233--236

Truncation Error Estimate on Random Signals by Local Average Gaiyun He1 , Zhanjie Song2, , Deyun Yang3 , and Jianhua Zhu4 School of Mechanical Engineering, Tianjin University, Tianjin 300072, China [email protected] 2 School of Science, Tianjin University, Tianjin 300072, China [email protected] 3 Department of Information Science, Taishan College, Taian 271000, China [email protected] 4 National Ocean Technique Center, Tianjin 300111, China [email protected] 1

Abstract. Since signals are often of random characters, random signals play an important role in signal processing. We show that the bandlimited wide sense stationary stochastic process can be approximated by Shannon sampling theorem on local averages. Explicit truncation error bounds are given. Keywords: stochastic process, random Signals, local averages, truncation error, Shannon sampling theorem.

1 Introduction and the Main Result

The Shannon sampling theorem plays an important role in signal analysis as it provides a foundation for digital signal processing. It says that any bandlimited function f, having its frequencies bounded by πW, can be recovered from its sampled values taken at the instances k/W, i.e.

f(t) = \sum_{k=−∞}^{+∞} f(k/W) sinc(Wt − k),    (1)

where sinc(t) = sin(πt)/(πt) for t ≠ 0, and sinc(0) = 1. This equation requires values of a signal f that are measured on a discrete set. However, due to its physical limitations, say the inertia, a measuring apparatus may not be able to obtain exact values of f at the epochs tk for k = 0, 1, 2, · · ·. Instead, what a measuring apparatus often gives us are local averages of f near each tk. The sampled values defined as local averages may be formulated by the following equation

Corresponding author. Supported by the Natural Science Foundation of China under Grant (60572113, 40606039) and the Liuhui Center for Applied Mathematics.


⟨f, u_k⟩ = ∫ f(x) u_k(x) dx    (2)

for some collection of averaging functions u_k(x), k ∈ Z, which satisfy the following properties:

supp u_k ⊂ [x_k − σ/2, x_k + σ/2],  u_k(x) ≥ 0,  and  ∫ u_k(x) dx = 1.    (3)

Here u_k for each k ∈ Z is a weight function characterizing the inertia of the measuring apparatus. In particular, in the ideal case the function is given by the Dirac δ-function, u_k = δ(· − t_k), because ⟨f, u_k⟩ = f(t_k) is then the exact value at t_k. The local averaging method in sampling was studied by a number of papers [1]–[6] from 1994 to 2006. The associated truncation error of (1) is defined by

R_N f = f(t) − \sum_{k=−N}^{+N} f(k/W) sinc(Wt − k) = \sum_{|k|>N} (−1)^k f(k/W) \frac{\sin πWt}{π(Wt − k)}.    (4)
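The truncated series with local-average samples can be illustrated directly. The following Python sketch computes the samples ⟨f, u_k⟩ of (2) and the truncated reconstruction of the type appearing in (4); the uniform averaging windows and the test signal are assumptions made only for the example.

import numpy as np

def local_avg_samples(f, N, Wbar, sigma):
    # <f, u_k>: uniform averaging weights on [k/Wbar - sigma/2, k/Wbar + sigma/2]
    ks = np.arange(-N, N + 1)
    samples = np.empty(ks.size)
    for i, k in enumerate(ks):
        xs = np.linspace(k / Wbar - sigma / 2, k / Wbar + sigma / 2, 201)
        samples[i] = np.trapz(f(xs), xs) / sigma
    return ks, samples

def truncated_reconstruction(t, ks, samples, Wbar):
    # sum_{k=-N}^{N} <f, u_k> sinc(Wbar*t - k); np.sinc(x) = sin(pi x)/(pi x)
    t = np.atleast_1d(t)
    return np.array([np.sum(samples * np.sinc(Wbar * tt - ks)) for tt in t])

f = lambda x: np.sinc(0.8 * x)                 # a band-limited test signal (assumption)
ks, s = local_avg_samples(f, N=50, Wbar=1.0, sigma=0.2)
approx = truncated_reconstruction(np.linspace(-5, 5, 11), ks, s, Wbar=1.0)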

On the one hand, we cannot evaluate an infinite number of terms in practice; we can only approximate signal functions by a finite number of terms, and the resulting truncation error has been bounded in a number of papers [7]–[15]. On the other hand, since signals often have a random character, random signals play an important role in signal processing, especially in the study of sampling theorems. An example is a speech signal, where the random portion of the function may be white noise or some other distortion in the transmission channel, perhaps given via a probability distribution; there are many papers on this topic as well, such as [16]–[24]. Here we give truncation error bounds for random signals sampled by local averages. Before stating the results, let us introduce some notation. L^p(IR) is the space of all measurable functions on IR for which ‖f‖_p < +∞, where

‖f‖_p := ( ∫_{−∞}^{+∞} |f(u)|^p du )^{1/p},  1 ≤ p < ∞,
‖f‖_∞ := ess sup_{u∈IR} |f(u)|,  p = ∞.

B_{πW,p} is the set of all entire functions f of exponential type at most πW that belong to L^p(IR) when restricted to the real line [25]. By the Paley-Wiener Theorem, a square integrable function f is band-limited to [−πW, πW] if and only if f ∈ B_{πW,2}. Given a probability space (Ω, A, P) [26], a real-valued stochastic process X(t) := X(t, ω) defined on IR × Ω is said to be stationary in the weak sense if E[X(t)²] < ∞, ∀t ∈ IR, and the autocorrelation function

R_X(t, t + τ) := ∫_Ω X(t, ω) X(t + τ, ω) dP(ω)

is independent of t ∈ IR, i.e., RX (t, t + τ ) = RX (τ ).


A weak sense stationary process X(t) is said to be bandlimited to an interval [−πW, πW] if R_X belongs to B_{πW,p} for some 1 ≤ p ≤ ∞. We now assume that the functions u_k given by (3) satisfy the following properties:

i) supp u_k ⊂ [k/W̄ − σ_k', k/W̄ + σ_k''], where σ/4 ≤ σ_k', σ_k'' ≤ σ/2 and σ is a positive constant;
ii) u_k(t) ≥ 0 and ∫ u_k(t) dt = 1;
iii) m = inf_{k∈Z}{m_k}, where m_k := ∫_{k/W̄−σ/4}^{k/W̄+σ/4} u_k(t) dt.    (5)

In this case, the associated truncation error for a random signal X(t, ω) is defined by

R_N^X = X(t) − \sum_{k=−N}^{+N} ⟨X, u_k⟩ sinc(W̄t − k),    (6)

where the autocorrelation function of the weak sense stationary stochastic process X(t, ω) belongs to B_{πW,2} and W̄ > W > 0. The following results were proved by Belyaev and by Splettstösser in 1959 and 1981, respectively.

Proposition A. ([16, Theorem 5]) If the autocorrelation function of the weak sense stationary stochastic process X(t, ω) belongs to B_{πW,2}, then for W̄ > W > 0 we have

E[|R_N^{X*}|²] = E[ |X(t, ω) − \sum_{k=−N}^{N} X(k/W̄, ω) sinc(W̄t − k)|² ] ≤ \frac{16 R_X(0)(2 + |t|W̄)²}{π²(1 − W/W̄)² N²}.    (7)

Proposition B. ([17, Theorem 2.2]) If the autocorrelation function of the weak sense stationary stochastic process X(t, ω) belongs to B_{πW,p} for some 1 ≤ p ≤ 2 and W̄ > 0, then

lim_{N→∞} E[ |X(t, ω) − \sum_{k=−N}^{N} X(k/W̄, ω) sinc(W̄t − k)|² ] = 0.    (8)

For this case, we have the following result.

Theorem C. If the autocorrelation function R_X of a weak sense stationary stochastic process X(t, ω) belongs to B_{πW,2}, for W̄ > W > 0 and 2/δ ≥ N ≥ 100, we have

E[|R_N^X|²] ≤ ( 14.80 ‖R_X‖_∞ + \frac{32 R_X(0)(2 + |t|W̄)²}{π²(1 − W/W̄)²} ) ( \frac{\ln N}{N} )²,    (9)

where {u_k(t)} is a sequence of continuous functions defined by (5).


Or, in other words,

E[|R_N^X|²] = O( (\ln N / N)² ),  N → ∞.    (10)

2 Proof of the Main Result

Let us introduce some preliminary results first. Lemma D. [27] One has for q  > 1, 1/p + 1/q  = 1, and W > 0,  q ∞   2 q < p . |sinc(W t − k)|q ≤ 1 +  π q −1

(2.1)

k=−∞

Lemma E. [26] If a stationary stochastic process X(t, ω), t ∈ [a, b] is continuous in mean square, f (t), g(t) are continuous function on [a, b], then

     b b b b E f (s)X(s)ds · g(t)X(t)dt = f (s)g(t)R(s − t)dsdt. (2.2) a

a

a

a

Lemma F. Suppose that the autocorrelation function RX of the weak sense stationary stochastic process X(t, ω) belongs to Bπ W,2 and W > 0, and satisfies  RX (t) ∈ C(IR). Let   j δ ; D W 2

       



j j j j := sup

RX −RX −δ ∗∗ − RX +δ ∗ + RX +δ ∗ −δ ∗∗

W W W W |δ∗ |≤ δ 2 |δ ∗∗ |≤ δ2

=



sup

δ

|δ∗ |≤

2

0



−δ ∗∗

0 δ∗

 RX





j + u + v dudv

. W

|δ ∗∗ |≤ δ2

Then we have for r, N ≥ 1,  2r +2N    jπ δ r δ  ; ≤ (4N + 1)(RX (t)∞ )r . D Ω 2 2 j=−2N

 Proof. Since RX is even and RX (t) ∈ C(IR), we have      r r +2N 2N   r   jπ δ jπ δ δ ; ; = D 0; +2 D D Ω 2 2 Ω 2 j=1 j=−2N  2r δ  r ≤ (4N + 1)(RX (t)∞ ) . 2

which completes the proof.

(11)


Proof of Theorem C. From Proposition A, Proposition B and Lemma E we have ⎡

2 ⎤  k/ W +δk N



  X 2



uk (t)X(t, ω)dtsinc(W t − k) ⎦ E |RN | = E ⎣ X(t, ω) −



 k=−N k/ W −δk

  N

 k

= E X(t, ω) − X , ω sinc(W t − k)

W k=−N   N  k X + , ω sinc(W t − k) W k=−N

2 ⎤  k/ W +δk N



uk (t)X(t, ω)dtsinc(W t − k) ⎦ −

 k=−N k/ W −δk  X∗ 2  = 2E |RN | ⎡

2 ⎤ 

   kπ/Ω+δk N



k



+2E ⎣

uk (t)X (t, ω) dt sinc(W t − k) ⎦ X ,ω −



 W k/ W −δk k=−N

N      

 δk  X∗ 2  k k

X , ω uk + s ds = 2E |RN | + 2E

 W W −δk k=−N

2 ⎤   δk



uk (k/ W +t)X (kπ/Ω + t, ω) dt sinc(W t − k) ⎦ −

 −δ k

N   X∗ 2  | +2 = 2E |RN



N 

k=−N j=−N

       k j j k E X ,ω X , ω uk ( + u)uj + v dudv   W W W W −δk −δk         δk  δk   k j k j E X − ,ω X + v, ω uk + u uj + v dudv   W W W W −δk −δk         δk  δk   k j k j E X − + u, ω X , ω uk + u uj + v dudv   W W W W −δk −δk          δk  δk   k j k j E X +u, ω X +v, ω uk +u uj +v dudv +   W W W W −δk −δk  δk



 δk

·|sinc(W t − k)||sinc(W t − j)|   N  δk N    X∗ 2  = 2E |RN | +2

   (k − j) (k − j) − RX −v   W W k=−N j=−N −δk −δk         (k − j) (k − j) k j −RX +u +RX +u−v uk +u uj +v dudv W W W W  δk





RX


·|sinc(W t − k)||sinc(W t − j)|   N  δk N    X∗ 2  ≤ 2E |RN | + 2 k=−N j=−N



 −δk



 δk

 −δk

D

(k − j) δ ; 2 W



   k j + u uj + v dudv · |sinc(W t − k)||sinc(W t − j)| uk W W   N N    X∗ 2  (k − j) δ ; D |sinc(W t − k)||sinc(W t − j)| = 2E |RN | + 2 2 W k=−N j=−N Using H¨older’s inequality and Lemma D, we have N 

N 

 D

k=−N j=−N

(k − j) δ ; 2 W

 |sinc(W t − k)| · |sinc(W t − j)|



p∗ ⎞1/p∗



  N N 

⎟ (k − j) δ ⎜

D ; ≤⎝ |sinc(W t − j)|



2 W

k=−N j=−N ⎛



1/q∗

N 

·

|sinc(W t − k)|q



k=−N





p∗ ⎞1/p∗



  N N 

⎟ (k − j) δ ∗ 1/q∗ ⎜

≤ (p ) D ; |sinc(W t − j)|

⎠ ⎝

2 W

k=−N j=−N ⎛



p∗ ⎞1/p∗

N

 



⎟ (k − j) δ ∗⎜

≤p ⎝ D , ; |sinc(W t − j)|



2 W

k=−N j=−N N 

where 1/p∗ + 1/q ∗ = 1. By Hausdorff-Young inequality [28, page176] and Lemma F, we have

p∗ ⎞1/p∗



N  



 (k − j) δ ⎜

|sinc(W t − j)|

⎠ D ; ⎝

2 W

k=−N j=−N ⎛

N 

⎛ ≤⎝

2N  

 D

j=−2N

≤ (4N + 1)1/r

j δ ; W 2

r∗

⎞1/r∗ ⎛ ⎠

⎝ ⎛



2N  j=−2N

⎞1/s∗ ∗

|sinc(W t − j)|s ⎠

⎞1/s∗  2 2N  ∗ δ  ⎝ RX (t)∞ |sinc(Ωt − jπ)|s ⎠ 2 j=−2N




 ≤ (4N + 1)1/r RX (t)∞



1 N

2

⎛ ⎝

∞ 


⎞1/s∗ ∗

|sinc(Ωt − jπ)|s ⎠

,

j=−∞

where 0 ≤ 1/s∗ + 1/r∗ − 1 = 1/p∗ . Let r∗ = ln N/2. Notice that N ≥ 100, we have



(4N + 1)1/r
0

2.1 Spatial Coordinate Discretization for the Long Waves in Shallow Water

The x-coordinate is discretized uniformly: Δx is the grid spacing, Δx = (b − a)/N, where N is the number of grid intervals of the computational domain [a, b]. The points {x_i = a + (i − 1)Δx}, i = 1, 2, 3, ..., N+1, are the discrete sampling points centered around the point x, with x_{i+k} − x_i = kΔx with respect to the coordinate at the grid point x_i. With the function values {u_i} and {v_i} at x_i, Eqs. (1)-(2) are represented as follows

∂ui ∂u ∂v 1 ∂ 2ui =− + ui i + i 2 ∂x 2 ∂t ∂x ∂x ∂ vi ∂ vi ∂ui 1 ∂ 2 vi = + ui + vi ∂t ∂x ∂x 2 ∂x 2

(5) (6)

Let fi = −

gi =

(i=1, 2, 3, ……..N+1)

∂ u i ∂ vi 1 ∂ 2ui + ui + ∂x ∂x 2 ∂x 2

(7)

∂vi ∂ui 1 ∂ 2 vi + ui + vi ∂x ∂x 2 ∂x 2

(8)

By (5)-(8), Eq.(5)-(6)can be expressed as follows ⎧ du i ⎪⎪ dt = f i ⎨ ⎪ dv i = g i ⎪⎩ dt

(9) (10)

2.2 Numerical Discrete Forms of the Quasi-wavelet Spatial Coordinate Discretization for the Long Waves in Shallow Water

To solve Eqs. (9)-(10), the regularized Shannon delta kernel is used, which can dramatically increase the regularity of Shannon's wavelet scaling function (or quasi-scaling function) [8]:

δ_{Δ,σ}(x) = \frac{\sin(πx/Δ)}{πx/Δ} \exp( −x²/(2σ²) )

(11)

where Δ is the grid spacing and σ determines the width of the Gaussian envelope; it can be varied in association with the grid spacing, i.e., σ = rΔ with r ≥ 2/π, where r is a parameter. The regularized Shannon wavelet is called a quasi-wavelet. With Shannon's scaling function as basis function, functions f(x) and g(x) band-limited to the interval [−π/Δ, π/Δ] can be expressed as





⎢⎣ − Δ Δ ⎥⎦

f ( x) =

+∞

∑δ

k =−∞

Δ ,σ

( x − xk ) f ( xk )

(12)

+∞

g(x) = ∑ δΔ,σ (x − xk )g(xk ) k =−∞

(13)



In the discrete singular convolution algorithm, the band-limited functions f(x), g(x) and their derivatives with respect to the coordinate at a grid point x are approximated by a linear sum of the discrete values {f(x_k)} and {g(x_k)} in the interval [−π/Δ, π/Δ]:

(n)

f

(x) ≈

w



k=−w

g (n) ( x) ≈

+w

∑δ

l =− w

Δ ⎥⎦

Δ

δ Δ( n,ϖ) ( x − x k ) f ( x k ) , ( n = 0 ,1, 2 ......) (n) Δ ,σ

( x − x k ) g ( x k ) , ( n = 0,1, 2..........)

(14) (15)
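A compact numerical sketch of the collocation formulas (11)-(15) is given below (Python). The weights use the closed-form kernel derivatives stated later in this subsection, with σ = rΔ and r = 3.2 as in the numerical experiments; treating the boundaries by simple truncation is an assumption, since the text extends the values outside the computational domain.

import numpy as np

def qw_weights(w, dx, r=3.2):
    # collocation weights delta^(1), delta^(2) at the grid offsets m*dx, m = -w..w
    sigma = r * dx
    m = np.arange(-w, w + 1)
    x = m * dx
    d1 = np.zeros_like(x, dtype=float)
    d2 = np.full_like(x, -(3.0 + (np.pi * sigma / dx) ** 2) / (3.0 * sigma**2))
    nz = m != 0
    xm = x[nz]
    mu = np.exp(-xm**2 / (2 * sigma**2))
    lam, kap = np.sin(np.pi * xm / dx), np.cos(np.pi * xm / dx)
    d1[nz] = mu * (kap / xm - lam * dx / (np.pi * xm**2) - lam * dx / (np.pi * sigma**2))
    d2[nz] = mu * (lam * (-np.pi / (dx * xm) + 2 * dx / (np.pi * xm**3)
                          + dx / (np.pi * xm * sigma**2) + dx * xm / (np.pi * sigma**4))
                   - 2 * kap * (1.0 / xm**2 + 1.0 / sigma**2))
    return d1, d2

def qw_derivatives(f, dx, w=10, r=3.2):
    # approximate f' and f'' on a uniform grid via Eq. (14); interior points only,
    # the ends still need the boundary extension mentioned in the text
    d1, d2 = qw_weights(w, dx, r)
    return np.convolve(f, d1, mode="same"), np.convolve(f, d2, mode="same")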

In fact, only the 2w grid points around a point {x_i} enter the computation; 2w + 1 is the total computational bandwidth, which is usually much smaller than the computational domain. Eqs. (14)-(15) are called the quasi-wavelet form of the numerical discretization. To compute Eqs. (14)-(15), the regularized Shannon delta kernel is used; the expressions for its derivatives δ^{(1)}_{Δ,σ} and

δ^{(2)}_{Δ,σ} can be given analytically as

δ^{(1)}_{Δ,σ}(x) = μ ( κ/x − λΔ/(πx²) − λΔ/(πσ²) )  for x ≠ 0,    δ^{(1)}_{Δ,σ}(0) = 0,

δ^{(2)}_{Δ,σ}(x) = μ [ λ ( −π/(Δx) + 2Δ/(πx³) + Δ/(πxσ²) + Δx/(πσ⁴) ) − 2κ ( 1/x² + 1/σ² ) ]  for x ≠ 0,    δ^{(2)}_{Δ,σ}(0) = −(3 + π²σ²/Δ²)/(3σ²),

where μ = exp(−x²/(2σ²)), λ = sin(πx/Δ) and κ = cos(πx/Δ).

2.3 Temporal Derivative Discretization

A Runge-Kutta scheme is used for the temporal derivatives: the ordinary differential equations (9)-(10) are discretized in time by the fourth-order Runge-Kutta method, which can be expressed as follows

uin +1 = uin +

Δt [ K i ,1 + 2 K i ,2 + 2 K i ,3 + K i ,4 ] 6

(i = 1,2, 3,......N = 1 )

(16)

where

K i ,1 = f i ,1n

v nj =1 = v nj +

L j ,1 = g nj ,1

K i ,2 = f i ,2n

K i ,3 = f i ,3n

K i ,4 = f i ,4n

(i= 1, 2, ...,N+1)

Δt [ L j ,1 + 2 L j ,2 + 2 L j ,3 + L j ,4 ] 6 L j ,2 = g nj ,2 L j ,3 = g nj ,3 L j ,4 = g nj ,4 (j=1, 2,...,N+1)

Where upper sign n is time level, Δ t is length of time. From Eq.(16),we have

(17) (18) (19)

A Numerical Solutions Based on the Quasi-wavelet Analysis w 1 w δ Δ( 2,σ) ( − m Δ x )u mn + i + u in ∑ δ Δ(1),σ ( − m Δ x )u in+ m ∑ 2 m=− w m=− w

K i ,1 = f i ,1n = −

w

+



m=−w

K i,2 = f

n i,2

w

∑δ

×

m=−w

(1 ) Δ ,σ

K i ,3 = f w

×



m=− w

w



m =− w

w

∑δ

m=−w

n i ,3

m=− w

Δt K i + m ,1 ] + 2

(20)

Δt Δt K i + m ,1 ] + [ u in + K i ,1 ] + 2 2 w

∑δ

m=−w

(1 ) Δ ,σ

( − m Δ x )[ v in+ m +

Δt L i + m ,1 ] 2

1 =− 2

w



m=−w

w Δt Δt K i + m ,2 ] + ∑ δ Δ(1),σ ( − m Δ x )[ vin+ m + Li + m ,2 ] 2 2 m=− w

(21)

(22)

δ Δ( 2,σ) ( − m Δ x )[ u mn + i + Δ tu i + m , 2 ] + [ u in + Δ tK i ,3 ]

δ Δ(1),σ ( − m Δ x )[ u in+ m + Δ tK i + m ,3 ] +

L j ,3 = g nj,3 (1) Δ ,σ

n m+i

1 w Δt Δt = − ∑ δ Δ( 2,σ) ( − m Δ x )[u mn + i + K i + m ,2 ] + [u in + K i ,2 ] 2 m=− w 2 2

w

w



m =−w

δ Δ(1),σ ( − m Δ x )[ v in+ m + Δ tL i + m ,3 ]

w

1 2

w



m=− w

δ Δ( 2,σ) ( − m Δ x )[ v nj + m +

(23)

w

1 δ Δ(1),σ (−mΔx)vnj +m + vnj ∑ δ Δ(1),σ (−mΔx)vnj +m ∑ δΔ(2),σ (−mΔx)vmn + j + unj m∑ 2 m=− w =− w m=− w L j , 2 = g nj , 2 =

w

( − m Δ x )[ u

(2) Δ ,σ

δ Δ(1),σ ( − m Δ x )[u in+ m +

L j ,1 = g nj ,1 =

∑δ

δ Δ(1),σ ( − m Δ x ) vin+ m

( − m Δ x )[ u in+ m +

K i , 4 = f i ,n4 ×

1 =− 2

1087

(24)

Δt L j + m ,1 ] 2

w Δt Δt + [ u nj + K j .1 ][ ∑ δ Δ(1),σ ( − m Δ x ) ×[ v nj + m ,1 + L j + m ,1 ] 2 2 m =− w w Δt Δt + [ v nj + L j ,1 ] ∑ δ Δ(1).σ ( − m Δ x )[ u nj + m + K j + m ,1 ] 2 2 m=−w 1 w Δt L j ,2 = g nj ,2 = L j + m ,1 ] δ Δ( 2,σ) ( − m Δ x )[ v nj + m + ∑ 2 m=−w 2 w Δt Δt + [ u nj + K j .1 ][ ∑ δ Δ(1),σ ( − m Δ x ) ×[ v nj + m ,1 + L j + m ,1 ] 2 2 m =− w w Δt Δt + [ v nj + L j ,1 ] ∑ δ Δ(1).σ ( − m Δ x )[ u nj + m + K j + m ,1 ] 2 2 m=− w 1 w Δt Δt L j + m , 2 ] + [u nj + K j .2 ] = ∑ δ Δ( 2,σ) ( − mΔx )[v nj + m + 2 m=− w 2 2

(25)

(26)

w Δt Δt Δt L j + m , 2 ] + [v nj + L j , 2 ] ∑ δ Δ(1.ω) (− mΔx )[u nj + m + K j + m , 2 ] (27) 2 2 2 m=− w 1 w = g nj, 4 = ∑ δ Δ( 2,σ) (− mΔx )[v nj + m + ΔtL j + m,3 ] 2 m=− w

(− mΔx ) × [v nj + m + L j ,4

w

+ [u nj + ΔtK j .3 ] ∑ δ Δ(1,σ) (−mΔx) × [v nj + m + ΔtL j + m ,3 ] m=−w w

+ [v nj + ΔtL j ,3 ] ∑ δ Δ(1.σ) (−mΔx)[u nj + m + ΔtK j + m ,3 ] m=−w

(28)

1088

Z.H. Huang, L. Xia, and X.P. He

When t=0

,the values of { u }

{ } (n=0)are obtained by Eq.(1)-(4). This

n i

n

and vi

can be rewritten as

u 0i = u ( x)

(i=1, 2, 3…N+1)

v = v ( x)

(j=1, 2, 3….…N+1)

0 j

Where [-w, +w] is the computation bandwidth. The w may be an arbitrary constant to reduce computation in narrower bandwidth.

3 Overall Solutions Scheme In the above, we have computed δ Δ(1,σ) and δ Δ( 2,σ) by only depending on the spacing

Δ , therefore, when the grid spacing is provided, the coefficients need to be computed only once and can be used during the whole computation. The main steps of computation can be expressed as following: 0

Step 1. using the known initial value both for ui (i=1, 2, 3…N+1) and ν 0j (j=1,2,3…... N+1) or time level values of previous time ui and ν n



n j

(i, j=1, 2, 3….

N +1 , and outside the computational domain are required extension. Step 2. From Eq.(20)-(26), transformations of the specified grid point values

f i ,n1 , f i ,n2 , f i ,n3 , f i ,n4 and g nj,1 , g nj, 2 , g nj,3 , g 4j , 4 are obtained. Step 3. By subtracting Eq(16)-(18) from second-step, the values are computed

u

n +1 i

n +1





and vi i, j=1, 2, 3……N +1 . Step 4. Repeating the above process, from the first-step to the third-step with being n +1

computational value u i

It satisfies the relation: achieved.

(i, j=1, 2, 3…N+1)and the boundary condition. t = t + Δt and n = n + 1 , until required time level is n +1

and vi

4 Comparison Computations Both quasi-wavelet numerical solutions and the analytical solutions are computed, by Eq.(1)-(2). Assuming that Eq.(1)-(2) satisfy the initial-boundary condition below u ( x, 0) =

c c (1 + ta n h x) 2 2

ν (x, 0) =

c2 c sec h 2 x 4 2

(29)

u (a, t ) =

c 1 c2 [1 + tanh ( ac + t )] 2 2 2

ν ( a ,t ) =

c2 1 c2 sec h 2 [ ac + t ] 2 2 2

(30)

u (b, t ) =

c 1 c2 [1 + tanh ( cb + t )] 2 2 2

ν (b , t ) =

c2 1 c2 [1 + tanh ( cb + t )] 2 2 2

(31)

We have analytical solutions to Eq.(1)-(2)

A Numerical Solutions Based on the Quasi-wavelet Analysis

u =

c 1 c2 [1 + tanh ( cx + t )] , 2 2 2

v=

1089

c2 1 c2 sec h 2 [ cx + t] 4 2 2

where c is an arbitrary constant. 0

0

To analyze and compare the computations, the initial values u i and v j of discrete are obtained from Eq.(29), where

u i0 =

c c {1 + tanh [ a + ( i − 1) Δ x ]} , 2 2

(i ,j=1,2,3……N+1)

v 0j =

c2 c sec h 2 [ a + ( j − 1) Δ x ] 4 2 n

n

We shall compute the values of previous time level from the above ui and vi

) ,



(i,

j=1,2,3……N+1 . We choose c=0.5 computational bandwidth W=10, orthonormal band σ = 3.2Δ computation domain[a, b]=[-100, 100], the number of grid N=200, allowable time step Δt = 0.002 . These values are computed by method of quasiwavelet, respectively, and such a plot is given in Figure 1-4. From Eq.(1)-(2), these figures are excellent agreement between the analytical solutions and quasi-wavelet numerical solutions.

Fig. 1. u-analytical solutions

Fig. 3. v- analytical solutions

Fig. 2. u-quasi-wavelet solution (where w=10, Δt=0.002,σ=3.2Δ)

Fig. 4. v-quasi-wavelet solution (where w=10, Δt=0.002,σ=3.2Δ)
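As a rough illustration of how such a comparison can be set up, the sketch below evaluates the analytical solution on the same grid used above (c = 0.5, [a, b] = [-100, 100], N = 200), so that a numerical solution can be compared with it pointwise. The maximum-error measure is the writer's choice for illustration, not necessarily the one used by the authors.

import numpy as np

def analytical_u(x, t, c=0.5):
    return c / 2.0 * (1.0 + np.tanh(0.5 * (c * x + c**2 / 2.0 * t)))

def analytical_v(x, t, c=0.5):
    # sech^2 written via cosh
    return c**2 / 4.0 / np.cosh(0.5 * (c * x + c**2 / 2.0 * t))**2

a, b, N = -100.0, 100.0, 200
x = a + np.arange(N + 1) * (b - a) / N   # grid points x_i = a + (i-1)*dx in the paper's 1-based notation

def max_error(u_numerical, t):
    """Maximum pointwise difference between a numerical solution and the analytical one."""
    return np.max(np.abs(u_numerical - analytical_u(x, t)))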


5 Conclusion

In this paper, a new quasi-wavelet method for numerical applications is introduced. The quasi-wavelet numerical solutions agree very closely with the analytical solutions, and the method has been applied with great success to a wide range of PDEs.

References
1. Whitham, G.B.: Variational methods and applications to water waves. Proc. Roy. Soc. London, 1967, (220A): 6--25.
2. Broer, L.J.F.: Approximate equations for long water waves. Appl. Sci. Res., 1975, (31): 337--396.
3. Kupershmidt, B.A.: Mathematics of dispersive waves. Comm. Math. Phys., 1985, (99): 51--73.
4. Wang, M.L.: A nonlinear function transformation and the exact solutions of the approximate equations for long waves in shallow water. Journal of Lanzhou University (Natural Sciences), 1998, 34(2): 21--25.
5. Huang, Z.H.: On Cauchy problems for the RLW equation in two space dimensions. Appl. Math. and Mech., 2002, 23(2): 169--177.
6. Morlet, J., Arens, G., Fourgeau, E., et al.: Wave propagation and sampling theory and complex waves. Geophysics, 1982, 47(2): 222--236.
7. Wei, G.W.: Quasi wavelets and quasi interpolating wavelets. Chem. Phys. Lett., 1998, 296(3-4): 215--222.
8. Wan, D.C., Wei, G.W.: The Study of Quasi-Wavelets Based Numerical Method Applied to Burgers' Equations. Appl. Math. Mech., 2000, (21): 1099.



Plant Simulation Based on Fusion of L-System and IFS

Jinshu Han

Department of Computer Science, Dezhou University, Dezhou 253023, China
[email protected]

Abstract. In this paper, we present a novel plant simulation method based on the fusion of the L-system and the iteration function system (IFS). In this method, the production rules and parallel rewriting principle of the L-system are used to simulate and control the development and topological structure of a plant, and IFS fractal graphics with controllable parameters and rich, subtle texture are employed to simulate the plant components, so that the merits of both techniques can be combined. Moreover, an improved L-system algorithm based on separated steps is used for the convenient implementation of the proposed fusion method. The simulation results show that the presented method can simulate various plants much more freely, realistically and naturally.

Keywords: Fractal plant; L-system; Iteration function system.

1 Introduction Along with the computer graphics development, the computer simulation for various natural plants as one of the research hotspots in computer graphics domain has the widespread application prospect in video games, virtual reality, botanical garden design, and ecological environment simulation. Nevertheless, at present, commercial image and graph processing software based on Euclidean geometry only can draw some regular geometric shapes, and can’t describe the natural plants with complex structures, subtle details and irregular geometric shapes realistically. Although we can use complex techniques such as curves or surfaces to approximate to such natural objects, we must ensure providing adequate memory spaces and high rendering speeds. On the other hand, fractal geometry takes irregular geometric shapes as research objects. One of the fractal typical application domains is natural objects simulation. In plant simulation, there are two common methods: L-system and iteration function system (IFS). Previous works[1,2,3,4,5] mainly focus on plant simulation only by L-system or only by IFS, and can’t provide realistic and natural representations for plant constructs and shapes due to the limitation of both methods. Obviously, combining them may be better. However, how to combine them is worth researching further. In response to such problem, a novel plant simulation method based on fusion of L-system and IFS is presented. The simulation results show the feasibility and validity of the presented method. Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1091–1098, 2007. © Springer-Verlag Berlin Heidelberg 2007


2 Characteristics of L-System and IFS L-system[2] is a rewriting system based on symbols. It defines a complex object through replacing every part of the initial object according to rewriting rules. For a certain natural plant, the simulation processes are as follows: Firstly, set of production rules is abstracted according to shapes and development rules of the plant. Secondly, starting from the initial string, a new string is created recursively according to the production rules. Thirdly, the created string is interpreted using turtle graphics algorithm and the corresponding geometric object is drawn. In plant simulation, L-system can better simulate development rules and topological structure of a plant, which is the main merit of L-system. For example, L-system can describe the development rules and structures of branches, leaves, flowers and seeds freely, because of the fact that the parallel principle of L-system is similar to the parallel growth course of plants. However, comparing with a natural plant, the plant graphics drawn by L-system is lack of texture. The key reasons are as follows: Firstly, the plant branches are represented by lines. Secondly, L-system only considers the self-similarity of the topological branching structure of the whole plant, and doesn’t consider that the self-similarity also exists in every component of plant, such as a bark or a leaf. IFS[3] represents a plant shape based on collage theorem. For a certain plant image, several sub-images generated by contraction affine transform are the little scaled version of the original image. They try to cover original image accurately, and they are permitted to overlap each other partly. So a set of contraction affine transforms are as follow:

W = \{\mathbb{R}^2 : \omega_1, \omega_2, \ldots, \omega_n\}, \qquad \omega_i\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} r\cos\theta & -q\sin\varphi \\ r\sin\theta & q\cos\varphi \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} e \\ f \end{bmatrix}, \quad i = 1, 2, \ldots, n   (1)

where r and q are scaling factors in the x and y directions, θ and φ are rotation angles between the sub-images and the original image, and e and f are translation distances of the sub-images in the x and y directions. In addition, based on the proportions of the sub-images' areas, an array of accumulative probabilities associated with W is obtained:

P = \{p_1, p_2, \ldots, p_n\}, \quad p_i > 0, \quad \sum_{i=1}^{n} p_i = 1, \quad i = 1, 2, \ldots, n   (2)

The above W and P form the IFS. Commonly, the random iteration algorithm is used to draw IFS fractal graphics. First, (x_0, y_0) is taken as the initial point. Next, an affine transform is selected from W according to P, and a new point (x_1, y_1) is computed. Then (x_1, y_1) is taken as the new initial point. The process is repeated, and the orbit of all the points forms a shape similar to the initial image, called the attractor of the IFS.

In plant simulation, IFS can represent the subtle texture of a plant well and its implementation is simple, which is the main merit of IFS. Nevertheless, IFS lacks representation and control of the topological structure of a plant. In other words, IFS simulates a plant only according to the features of the plant's geometric shapes, and cannot represent the physiological features of the plant. Therefore the simulated plants look machine-made and differ little from each other. The key reason is that, for a plant simulated by IFS, both the whole and the parts have strict self-similarity: the length and number of branches of the trunk obey the same strict self-similarity rules as those of the branches or sub-branches, and the same rules recur at ever finer levels of detail. Therefore, it is difficult to simulate a natural plant with rich texture and a complex structure and shape using only the L-system or only IFS.
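The random iteration algorithm described above can be sketched as follows. This is an illustrative rendering rather than the author's program, and it assumes the transforms are given in the (a, b, c, d, e, f, p) form used in the Appendix.

import random

def ifs_attractor(transforms, n_points=50000):
    """Draw the IFS attractor by the random iteration algorithm.
    transforms: list of (a, b, c, d, e, f, p) with x' = a*x + b*y + e, y' = c*x + d*y + f."""
    probs = [t[6] for t in transforms]
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f, _ = random.choices(transforms, weights=probs, k=1)[0]
        x, y = a * x + b * y + e, c * x + d * y + f   # apply the selected affine transform
        points.append((x, y))
    return points[100:]   # discard the first iterates before the orbit settles on the attractor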

3 Plant Simulation Method Based on Fusion of L-System and IFS Based on above analysis, a novel plant simulation method based on fusion of L-system and IFS is presented. The basic idea is as follows: L-system is used to simulate the diverse development rules and topological structure of a plant, and IFS is employed to simulate the textured plant components, such as branches and leaves. Therefore, the merits of both techniques can be combined. For one thing, through L-system, we can freely control and simulate the development rules and topological structure of plant. For another, through IFS we can quickly simulate the self-similarity and plenty texture of the plant components. The process of combining L-system with IFS is similar to that of building the building blocks. For one thing, the production rules and parallel rewriting principle of L-system are regarded as building rules for building blocks. For another, IFS fractal graphics with controllable parameters and plenty subtle texture are regarded as various building blocks. Therefore, we can build the IFS building blocks according to L-system rules, so that realistic simulated plant shapes are created. Next, taking Fig.1 as the example, we introduce the implementation process of the novel method. Fig.1 (a) is a leaf created by IFS[6]. It is regarded as a leaf building block. The IFS algorithm of the leaf can be encapsulated as a function defined as leaf ( x, y, r , k1 , k 2 ) . Where x, y is the value of x-coordinate or y-coordinate of the petiole’ bottom endpoint, r is the rotation angle of the whole leaf, k1 , k 2 is the scaling factor in x or y direction. If we change the values of x, y, r , k1 , k 2 , or the five controllable parameters, we can create various leaves with different sizes, directions and coordinate positions. Obviously, the leaf building block is not invariable, on the contrary, its size, direction and coordinate position are changeable and controllable. Fig.1 (b) shows branches created by L-system. L-system is a formal language, which models objects through giving geometric interpretation to the character in string. For instance, character F is a basic symbol in L-system, while drawing, F is interpreted as Moving forward by line length drawing a line. However, in this paper, character F can be interpreted as a novel meaning. For example, some F in the character string can be interpreted as Moving forward drawing a leaf by calling the function leaf ( x, y, r , k1 , k 2 ) . It means that the present values of coordinate position and drawing direction or angle are regarded as actual parameters of function leaf () . Therefore, An IFS leaf replaces the traditional L-system’ simple line. At the same time, next coordinate values and angle are computed to prepare for next drawing step.
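A minimal sketch of the idea just described is given below: a string is rewritten by the L-system production rules, and during turtle interpretation the symbol F is mapped to a call of an IFS building-block function such as leaf(x, y, r, k1, k2). The rule set, angle, step length and the stub leaf function are placeholders chosen for illustration, not the values used in the paper.

import math

def rewrite(axiom, rules, iterations):
    """Parallel rewriting: every symbol is replaced by its production in each pass."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def interpret(commands, leaf, step=10.0, angle=25.0):
    """Turtle interpretation in which 'F' draws an IFS leaf at the current position and heading."""
    x, y, heading = 0.0, 0.0, 90.0
    stack = []
    for ch in commands:
        if ch == "F":
            leaf(x, y, heading, 1.0, 1.0)           # IFS building block instead of a plain line
            x += step * math.cos(math.radians(heading))
            y += step * math.sin(math.radians(heading))
        elif ch == "+":
            heading += angle
        elif ch == "-":
            heading -= angle
        elif ch == "[":
            stack.append((x, y, heading))            # remember a growth point
        elif ch == "]":
            x, y, heading = stack.pop()

# Example: a small bracketed L-system expanded twice, drawn with a stub leaf function.
commands = rewrite("F", {"F": "F[+F]F[-F]F"}, iterations=2)
interpret(commands, leaf=lambda x, y, r, k1, k2: None)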


Under the plant growth rules shown in Fig.1 (b), the IFS leaf building block with controllable parameters shown in Fig.1 (a) is called, so a plant with leaves in certain growth position is created as shown in Fig.1 (d). In order to simulate textured branches, we define a branch building block shown in Fig.1(c). The branch building block is created by IFS[7].The IFS algorithm of the branch is encapsulated as a function defined as trunk ( x, y, r , k1 , k 2 ) , the meaning of every parameter is similar to that of function leaf () . Fig.1 (e) is a modeling plant using L-system to combine IFS leaves and IFS textured branches together.

Fig. 1. Process of simulating a plant by combining L-system with IFS: (a) An IFS leaf. (b) A modeling branch with L-system. (c) An IFS textured branch. (d) Plant created with modeling branches and IFS leaves. (e) Plant created with modeling branches, IFS leaves, and IFS textured branches.

Through above process, we may see that as far as the method based on fusion of Lsystem and IFS, IFS is used to create various graphics building blocks representing components of plant, and we can freely modify set of production rules of L-system, namely, building rules for building blocks can be modified. Actually, L-system determines the development rules and the whole structure of a plant, IFS determines the modeling and texture of every basic part of a plant. Through combining the merits of both techniques, we may simulate the topological structure and subtle texture of a plant freely.

4 Improved L-System Implementation Algorithm In order to create a realistic and natural modeling plant, we commonly request that any component of the plant, such as a trunk, a branch or a leaf can be changed freely in width, length, color and growth direction. Therefore, in implementation process of the fusing method, according to actual demands, L-system must freely adjust or control the function parameters of plant components created by IFS.


The implementation algorithm of traditional L-system is as follows: Firstly, beginning with the initial string, according to set of production rulers, the string is rewritten repeatedly to create a long string until the iteration number is reached. Secondly, the long string is interpreted and drawn. This kind of final one-off drawing process is not flexible enough. It is difficult in controlling the parameters of plant components freely. Therefore, this paper presents the improved implementation algorithm of L-system. The final one-off drawing process is separated into several steps, namely, the string is rewritten once, then it is interpreted and drawn once, only the new created parts are drawn every time. Fig.2 shows the steps of drawing plant based on improved algorithm, where n is iteration number. Fig.2 (a) shows the father branches, where n = 1 . In the figure, numbers 1, 2, 3 label the points called growth points where the first generation sub-branches start to draw. Once the father branches have been drawn, the position and direction of these growth points are recorded, preparing for drawing the first generation sub-branches in next step. Fig.2 (b) shows the first generation sub-branches drawn from the growth points according to production rules, where n = 2 . At the same time, the growth points of the second generation sub-branches, labeled by numbers from 1 to 9, are recorded. Fig.2 (c) shows the third generation sub-branches drawn from growth points recorded in Fig.2 (b). Repeat above steps, and a plant with complex shape and topological structure is created.

Fig. 2. Process of drawing a plant based on the improved L-system algorithm: (a) n = 1. (b) n = 2. (c) n = 3.
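The step-separated drawing described above can be sketched as follows: instead of expanding the whole string and drawing once, each generation is drawn from the recorded growth points, so the parameters of the parts drawn at generation n can be adjusted before generation n+1 is produced. The function names produce_children and draw_branch are hypothetical, standing in for whatever the production rules and rendering routine provide.

def grow(initial_points, produce_children, draw_branch, generations):
    """Draw a plant generation by generation.
    produce_children(point, n) returns the growth points created by the branch drawn from `point`
    at generation n; draw_branch(point, n) renders that branch (width/length/colour may depend on n)."""
    points = list(initial_points)
    for n in range(1, generations + 1):
        next_points = []
        for p in points:
            draw_branch(p, n)                   # only the newly created parts are drawn at this step
            next_points.extend(produce_children(p, n))
        points = next_points                    # recorded growth points for the next generation
    return points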

The improved algorithm based on separating steps can control the drawing plant process flexibly. For example, during the process of simulating plant, according to actual needs, we can freely control the number of growth points based on the value of n . We can control and adjust the parameters of width, length, color and growth direction of certain branches freely. We can also freely control that the next generation to be drawn is leaf or flower based on n . Therefore, L-system and IFS are combined into an organic whole, and can realize better plant simulation freely.


5 Results

Based on the fusion algorithm of L-system and IFS, many plant simulation experiments have been done. During the experiments, a modest random factor is added to every drawing parameter involved in the L-system and IFS. Fig.3, Fig.4, and Fig.5 show some of the typical experimental results. Fig.3 shows the simulation results for four maples. In Fig.3 (a)(c)(d)(e), the development rules of the L-system are used to combine IFS branch building blocks with the IFS maple leaf building block[8] shown in Fig.3(b), and random factors are added in the process. The simulation results show that the four maples have a similar natural structure and subtle texture, but their branches and leaves at different ranks differ modestly in width, length, color and growth direction. Similarly, Fig.4 and Fig.5 are also created by the improved method presented in this paper: the development rules of the L-system are used to organize IFS branch building blocks and IFS leaf building blocks into an organic whole.

Fig. 3. Plant simulation results 1: (a) Maple 1. (b) Maple leaf. (c) Maple 2. (d) Maple 3. (e) Maple 4.

6 Conclusion

This paper presents an improved method for plant simulation based on the fusion of L-system and IFS. In this method, the L-system is used to simulate the random development and topological structure of a plant, and IFS is used to simulate the self-similarity and subtle texture existing in the components of the plant, so that more natural and realistic simulated plants can be created. In addition, based on the improved method presented in this paper, three-dimensional L-systems and lighting effects can be employed to create even more realistic and natural simulated plants.


Fig. 4. Plant simulation results 2: (a) Arbor. (b) An arbor leaf[6]. (c) Bamboo. (d) A bamboo pole. (e) A bamboo leaf.

Fig. 5. Plant simulation results 3: (a) Shrub. (b) A flower.

Acknowledgments We thank Q.Z Li and anonymous reviewers for their suggestions and comments.

References 1. Jim Hanan.: Virtual plants-integrating architectural and physiological models. In: Environmental Modeling & Software 12(1)(1997)35-42 2. Prusinkiewicz P.: Modeling of spatial structure and development of plants: a review. In: Scientia Horticulturae 74(1998)113-149


3. Slawomir S. Nikiel.: True-color images and iterated function systems. In: Computer & Graphics 22(5) (1998)635-640 4. Xiaoqin Hao.: The studies of modeling method of forest scenery for three dimension iterated function system. In: Chinese Journal of Computers 22(7) (1999)768-773 5. Zhaojiong Chen.: An approach to plant structure modeling based on L-system. In: Chinese Journal of Computer Aided Design and Computer Graphics 12(8) (2000)571-574 6. Sun wei, Jinchang Chen.: The simple way of using iteration function system to obtain the fractal graphics. In: Chinese Journal of Engineering Graphics22(3) (2001)109-113 7. Chen qian, Naili Chen.: Several methods for image generation based on fractal theory. In: Chinese Journal of Zhejiang University (Engineering Science) 35(6) (2001)695-700 8. Hua jie Liu, Fractal Art. Electronic edition. Electronic Video Publishing Company of Hunan. Hunan China(1997)chapter 5.4

Appendix The related data and IFS code of the figures in the paper are as follows: where a = r cos θ , b = − q sin ϕ , c = r sin θ , d = q cos ϕ . Table 1. Affine transform parameters in Fig.1 (c)

ω    a    b    c    d    e   f   p
1    0.5  0.5  0.0  0.0  0   0   0.15
2    0.5  0.5  0.0  0.0  50  0   0.35
3    0.5  0.5  0.0  0.0  50  50  0.35
4    0.5  0.5  0.0  0.0  50  50  0.15

In Fig.4 (d), p of ω3 in Table 1 is changed into 0.15. Other parameters aren’t changed. Table 2. Affine transform parameters in Fig.4 (e)

ω    a     b      c     d     e     f     p
1    0.29  0.4   -0.4   0.3   0.28  0.44  0.25
2    0.33  -0.34  0.39  0.4   0.41  0.0   0.25
3    0.42  0.0    0.0   0.63  0.29  0.36  0.25
4    0.61  0.0    0.0   0.61  0.19  0.23  0.25

In Fig.5 (b), the flower is created recursively based on the following equations:

X_{n+1} = bY_n + F(X_n), \quad Y_{n+1} = -X_n + F(X_{n+1}), \quad F(X) = aX + \frac{2(1-a)X}{1+X^2}   (3)

where a = 0.3, b = 1.0.
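A small sketch of iterating the recursion (3) to obtain the flower points is given below; the number of iterations and the initial point are arbitrary choices made for illustration.

def flower_points(a=0.3, b=1.0, n=20000, x0=0.1, y0=0.0):
    """Iterate X_{n+1} = b*Y_n + F(X_n), Y_{n+1} = -X_n + F(X_{n+1}) with F(X) = a*X + 2*(1-a)*X/(1+X*X)."""
    def F(x):
        return a * x + 2.0 * (1.0 - a) * x / (1.0 + x * x)
    x, y = x0, y0
    pts = []
    for _ in range(n):
        x_new = b * y + F(x)
        y_new = -x + F(x_new)
        x, y = x_new, y_new
        pts.append((x, y))
    return pts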

A System Behavior Analysis Technique with Visualization of a Customer’s Domain Shoichi Morimoto School of Industrial Technology, Advanced Institute of Industrial Technology 1-10-40, Higashi-oi, Shinagawa-ku, Tokyo, 140-0011, Japan [email protected]

Abstract. Object-oriented analysis with UML is an effective process for software development. However, the process closely depends on workmanship and experience of software engineers. In order to mitigate this problem, a precedence effort, scenario-based visual analysis, has been proposed. The technique visualizes a customer’s domain, thus it enables requirement analyzers and customers to communicate smoothly. The customers themselves can schematize their workflows with the technique. Consequently, the analyzers and customers can easily and exactly derive use case models via the collaborative works. Thus, this paper proposes a technique to advance the analysis further, inheriting such advantages. The extended technique can analyze initial system behavior specifications. The customers can also join and understand system behavior analysis, thus they can easily decide on specifications for developing systems. Keywords: Activity diagrams, Model-based development.

1 Introduction

Requirement analysis and specifications for software are important factors to make success of software development and because quality of the analysis affects quality of the software, it is the most important process. Thus, various analysis techniques were proposed; especially, Object-Oriented Analysis (OOA) with UML1 is most widely used to model a domain of customers. After the modeling, customers and developers can understand and analyze the domain and systematically decide requirement specifications [6]. However, because the developers must fully analyze a domain of customers based on OOA, quality of the analysis is dependent on their capability. In order to mitigate the problem, Scenario-based Visual Analysis, SVA for short, was proposed [3, 2]. In SVA, analyzers and customers can analyze requirements in a domain and elicit use cases from very simple workflow scenarios cooperatively. That is to say they can easily understand, schematize and analyze the domain in a much simpler manner than using UML. On the other 1

Unified Modeling Language: http://www.omg.org/UML/

Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1099–1106, 2007. c Springer-Verlag Berlin Heidelberg 2007 


hand, difficulty and quality of system behavior analysis in OOA with UML is dependent on workmanship and experience of designers likewise. Therefore, this paper proposes a system behavior analysis technique utilizing resources which are generated in the SVA processes. One can model not only use cases but also system behavior with the technique. Moreover, the technique visualizes a domain of customers, thus it enables designers and customers to communicate smoothly. Consequently, the customers can easily and exactly decide requirements to the developers.

2 Process of the System Behavior Analysis Technique

We herein explain the process of SVA and the system behavior analysis.

2.1 The Process of the Scenario-Based Visual Analysis

SVA is adaptable to the requirements phase of object-oriented software development; that is, use case diagrams can be obtained via the collaborative works in the process. Use cases and actors are generally found out from a conceptual map, named the business bird's eye view (BEV), through arranging icons which indicate subjects, verbs, and nouns in workflow scenarios. A software tool named SVA editor is also provided to support the operations [3]. Analyzers can systematically perform the analysis and customers can easily join the process. Furthermore, in the last phase of the process, both of them can collaboratively and visually decide which part of the tasks in the scenario should be implemented as software on a BEV.

In SVA, analyzers use workflow scenarios in order to capture a customer's business domain. A BEV is created from the workflow scenarios to obtain a conceptual map of the domain. The BEV is then arranged to clarify the whole structure of the elements which constitute the workflow. Finally, use case diagrams are elicited from the BEV. The process of SVA is performed in the following steps:

1. Customers describe workflow scenarios.
2. Analyzers form an initial BEV from the scenarios.
3. The analyzers arrange the BEV to cluster verbs.
4. The analyzers and the customers analyze roles of the subjects.
5. The analyzers consider the system boundary.
6. The analyzers obtain use case diagrams.

SVA defines some rules for writing workflow scenarios as follows.

– Use simple sentences with one verb and several nouns, which might involve articles, prepositions, and adjectives. Other parts of speech, such as adverbs, can be used but are not analyzed in SVA.
– Use active sentences; do not use the passive voice. The order of the words is Subject-Verb-Objects.
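As an illustration of how scenario sentences written under these rules can be turned into BEV elements, the following sketch parses each sentence into a subject, a verb and the remaining nouns and records them as nodes and links. The parsing is deliberately naive, and the data structures are the writer's own; they are not part of SVA or the SVA editor.

def sentence_to_parts(sentence, stopwords=("the", "a", "an", "to", "of", "in", "on", "from")):
    """Split a Subject-Verb-Objects sentence: the first content word is taken as the subject,
    the second as the verb, and the rest as the other nouns."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    content = [w for w in words if w not in stopwords]
    return content[0], content[1], content[2:]

def build_bev(scenario):
    """Collect shared noun nodes and one verb node per sentence, with solid/broken links."""
    nouns, links = set(), []
    for k, sentence in enumerate(scenario):
        subject, verb, objects = sentence_to_parts(sentence)
        verb_node = f"{verb}#{k}"                  # verbs are not shared, so each occurrence is unique
        nouns.update([subject] + objects)           # the same nouns are shared by every sentence
        links.append((verb_node, subject, "solid"))
        links.extend((verb_node, o, "broken") for o in objects)
    return nouns, links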


Since task statements of customers are described in the workflow scenarios using simple natural language, they can easily confirm the correctness of the contents. The analyzers form a BEV from each sentence in the workflow scenarios. Rectangle icons and oval icons are used to stand for nouns and verbs, respectively. A subject and the other icons are connected by lines from a verb. The line to the subject is drawn as a solid line and lines to the other icons are drawn as broken lines. After all the sentences in the workflow scenarios have been visualized on the BEV, the analyzers synthesize them. Same nouns are shared by every sentence in which they are used. Verbs are not shared even if the same verbs appear in several sentences. After the synthesis, the analyzers rearrange elements so that the BEV will become legible. This arrangement also produces a semantic relation structure on the conceptual map. During this arrangement to clarify, the analyzers have to take care to find out semantic clusters of verb icons. This process is necessary to analyze roles of subjects in the next step where noun icons will be analyzed. In the next step, the analyzers abstract subjective noun icons as roles of tasks. If a subjective noun icon is connected from some clusters of verbs, the subjective noun has some different roles. In such cases, a single subject is decomposed to multiple roles. Each of them is reconnected to the verb clusters. After the role analysis, both the analyzers and customers decide a system boundary on the rearranged BEV. This decision is made by both the analyzers and the customers cooperatively. The boxed line is used to indicate what part of the tasks is developed to be a software system. After the boundary is drawn, the analyzers have to consider what nouns should be moved into the boundary. For example, they must decide whether or not physical objects are implemented in software. In such situations, the editable map, i.e., the BEV acts as a communication tool between the analyzer and customer. They can have a visual support, i.e., the map, to discuss about the scope of the system which is going to be developed. Generating a use case diagram is the final step of SVA. The way how to elicit actors and use cases is very simple. The role icons connecting to the verbs on the system boundary line and located outside of the box become actors. The verbs on the boundary which connect to the actors become use cases. You can get the further details of the SVA process from the references [3, 2]. 2.2

Artifacts of the System Behavior Analysis Technique

In the system behavior analysis of OOA with UML, interaction or activity diagrams are generally used. Interaction diagrams illustrate how objects interact via messages. Activity diagrams are typically used for business process modeling, for modeling the logic captured by a single use case or usage scenario, or for modeling the detailed logic of a business rule [1]. Because activity diagrams are closely related with scenarios in natural language and are suitable for system behavior analysis, the objective of our technique is to design activity diagrams from the results in SVA (i.e., workflow scenarios, a BEV, and use case diagrams).

2.3 The Procedure of the System Behavior Analysis Technique

The system behavior analysis technique effectively utilizes the artifacts of SVA. Designers first select a target use case from a use case diagram elicited in SVA. Secondly, the designers extract the source verb icon of the selected use case and all noun icons which are connected with that verb icon from the BEV. In the third step the designers and customers cooperatively analyze activities from the extracted icons and the source sentence of the icons in the workflow scenario. Next the designers draw partitions of an activity diagram based on the actors which are connected with the selected use case in the use case diagram. Then the designers set the elicited activities on the partition drawn from the actor of each activity's subject, in chronological order. The designers repeat the above steps for all use cases in the use case diagram. The following is the procedure of the analysis.

1. After having used SVA, select a use case in a use case diagram.
2. Extract the source verb icon of the selected use case and all noun icons which are linked with the verb icon in the BEV.
3. Elicit activities from the extracted icons and the source sentence of the icons in the workflow scenario.
4. Draw partitions from actors which are linked with the selected use case in the use case diagram.
5. Put the activities on the corresponding partition in chronological order.
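The five steps above can be read as a small transformation over the SVA artifacts. The sketch below shows one possible data-structure view of it, with the BEV links, use case actors and the activity-naming step assumed to be available as plain Python structures; this is the writer's illustration, not part of the original technique.

def elicit_activity_diagram(use_case, bev_links, use_case_actors, activities_of):
    """Build a simple activity-diagram structure for one use case.
    use_case: (verb_node, name); bev_links: list of (verb_node, noun, line_style);
    use_case_actors: {use_case_name: [actor, ...]}; activities_of: callable through which the
    designers and customers name activities from the extracted icons and the source sentence."""
    verb_node, name = use_case
    nouns = [n for v, n, _ in bev_links if v == verb_node]        # step 2: icons linked to the verb
    activities = activities_of(name, nouns)                        # step 3: group work with customers
    partitions = {actor: [] for actor in use_case_actors[name]}    # step 4: one partition per actor
    for actor, activity in activities:                             # step 5: chronological placement
        partitions[actor].append(activity)
    return partitions

# Example for the hospital case: the "make a chart" use case handled by the receptionist.
example = elicit_activity_diagram(
    ("make#5", "make a chart"),
    [("make#5", "medical chart", "broken"), ("make#5", "application", "broken"),
     ("make#5", "insurance card", "broken")],
    {"make a chart": ["receptionist"]},
    lambda name, nouns: [("receptionist", "make a new chart data"),
                         ("receptionist", "input the information on an application"),
                         ("receptionist", "input the information on an insurance card")])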

3 Application

In order to intelligibly demonstrate the steps of the process in detail, we present an actual consulting example. The business domain of this example is a hospital, where a new software system is needed to support the staff's daily tasks. In particular, the staff want to develop an online medical record chart system for doctors.

3.1 Use Case Analysis

The objective of this paper is not to show the use case analysis of SVA, thus we show only the outline of the SVA phase. The workflow scenario on the next page Fig. 1 shows the business domain of the first medical examination. The parts which are surrounded by square brackets denote modifications in the revision phase of SVA. These parts were elicited by collaborative discussion of the analyzers and customers. First, all the workflows were modeled into BEVs. Secondly, the BEVs were synthesized into one diagram. Thirdly, the verb icons were grouped and roles of the subjective nouns were analyzed in the synthesized BEV. The BEV on the next page Fig. 2 was finally modeled from the workflow scenario. The verb icons were classified into the clusters (A), (B), (C), (D), (E), (F), (G), (H), (I), (J), (K), and (L).


1. The patient submits the insurance card to the receptionist. 2. The receptionist inquires of the patient the symptom. 3. The receptionist decides the department from the symptom. 4. The receptionist makes the patient fill the application. 5. The receptionist makes the medical chart from the application [and the insurance card]. 6. [The receptionist makes the consultation card from the insurance card]. 7. The receptionist brings the medical chart to the surgery of the department. 8. The receptionist hands the nurse in the surgery the medical chart. 9. The nurse calls the patient in the waiting room. 10. The patient enters the surgery. 11. The doctor examines the patient. 12. The doctor gives the medical treatment to the patient. 13. The doctor writes the medical treatment to the medical chart. 14. The patient leaves the surgery. 15. The patient goes the waiting room. 16. The nurse brings the medical chart to the reception. 17. The receptionist calculates the medical bill [from the medical treatment in the medical chart]. 18. The receptionist calls the patient in the waiting room. 19. The receptionist charges the patient the medical bill. 20. The patient pays the receptionist the medical bill. 21. The receptionist hands the patient the consultation ticket [and the insurance card]. 22. The receptionist puts the medical chart on the cabinet.

Fig. 1. The workflow scenario of the first medical examination

Fig. 2. The BEV of the hospital business


The subjective nouns which are linked with two or more verbs were decomposed into some roles. Consequently, the use case diagram in Fig. 3 was also composed from the chart system boundary on Fig. 2. Use cases are the verb clusters (G), (H), (K), and (L) on the boundary. Actors are the roles (Y) and (Z) which are connected to the verb clusters with the solid line. Two actors and four use cases are elicited and named adequately. The foregoing operations are the SVA outline.

Fig. 3. The use case diagram of the online chart system (actors: receptionist (Y) and doctor (Z); use cases: (G) make a chart, (H) make a consultation ticket, (K) calculate a doctor's bill, (L) write a medical treatment)

3.2 System Behavior Analysis

We analyze the system behavior from the workflow, the BEV, and the use case diagram of Section 3.1. First, we select the use case “make a chart.” This use case was obtained from the part (G) in the BEV. The verb icon is “make” on the part (G) of Fig. 2 and the noun icons which are directly linked to it are “insurance card,” “application,” and “medical chart.” These are the first and second processes of the system behavior analysis mentioned in Section 2.3. Next, see the source sentence of the icons in the scenario Fig. 1. The sentence corresponds to “5. The receptionist makes the medical chart from the application and the insurance card.” That is, it is clarified that the information of “application” and “insurance card” is required in order to make a medical chart. Therefore, the activities “input the information on an application” and “input the information on an insurance card” are identified. We used the nouns “application” and “insurance card,” however the noun “medical chart” is not yet used. The business domain of this example is the first medical examination, thus the patient visits the hospital surely for the first time. Because there is no chart of the patient, the receptionist must make a new chart for the patient. That is, before the foregoing activities, the activity “make a new chart data” is required. The above operation is the third process of the system behavior analysis. Finally, make a partition from the actor linked to the use case, i.e., “receptionist” and put the activities on the partition in chronological order. In consequence, the activity diagram in Fig. 4 is composed.


Fig. 4. The activity diagram of the use case “make a chart”

Fig. 5. The activity diagrams of the other use cases

Similarly, we analyze the use case “make a consultation ticket.” This use case was obtained from the part (H) in the BEV. The verb icon is “make” on the part (H) of Fig. 2 and the noun icons which are directly linked to it are “consultation ticket” and “medical chart.” The source sentence corresponds to “6. The receptionist makes the consultation card from the insurance card.” The information of “medical chart” is required in order to make a consultation ticket. Therefore, the activity “input the information on a medical chart” is clarified. Because there is no consultation ticket of the patient like the above analysis, the activity “make a new consultation ticket data” is required. Moreover, if you pursue the connection of the icon “consultation ticket,” it will be clarified that the receptionist must hand the patient the actual object, i.e., the ticket. Accordingly, the receptionist must print out the ticket with the online chart system; the activity “print out a consultation ticket” is identified. Since the third process of the system behavior analysis is a group work, the designers can easily get customer’s consent about such activities. The other use cases can be analyzed in the same procedure likewise. The diagrams are shown in Fig. 5. Moreover, the technique is adaptable to the business modeling [5]. SVA can also analyze business use cases. If the boundary for the hospital business is drawn


in Fig. 2, the business use case diagram is formed in the same way. Similarly, a business activity diagram can be elicited from the workflow scenario. The subject in each workflow may become a partition of a business activity diagram and each workflow sentence excepted the subject clause may become a business activity.

4 Concluding Remarks

In this paper, we have proposed a system behavior analysis technique utilizing the scenario-based visual analysis. Software designers can obtain activity diagrams according to the process of the technique. Since the technique utilizes the scenario-based visual analysis, the designers can understand customer’s business domain. Moreover, since the rules of the scenario description are very simple, the customers who fully understand the business domain can write the workflow scenarios. Consequently, they can have common understanding via the group works. That is, they can easily decide specifications on software development. Henceforth, we will apply the technique to further practical subjects. The activity diagrams of the example were not complex; they do not use condition or decision. Moreover, the technique designs only activity diagrams now. If class diagrams can be elicited from workflow scenarios, sequence and state machine diagrams can also be designed. In the business bird’s eye view, designers can easily decide classes, because noun factors in the customer’s domain are clarified. However, in order to identify attributes and methods of classes, it may be necessary to add further information, e.g., a picture. In the case of the example in this paper, maybe the picture of the medical chart is required for class design. We are improving the technique and developing a tool to aid this idea. Acknowledgement. My special thanks are due to Dr. Chubachi for his advice.

References 1. Ambler, S. and Jeffries, R.: Agile Modeling, Wiley (2002) 2. Chubachi, Y., Kobayashi, T., Matsuzawa, Y., and Ohiwa, H.: Scenario-Based Visual Analysis for Use Case Modeling, IEICE Transactions on Information and Systems, Vol. J88-D1, No.4 (2005) 813–828 (in Japanese) 3. Chubachi, Y., Matsuzawa, Y., and Ohiwa, H.: Scenario-Based Visual Analysis for Static and Dynamic Models in OOA, Proceedings of the IASTED International Conference on Applied Modelling and Simulation (AMS 2002), ACTA Press (2002) 495–499 4. Jacobson, I., Booch, G., and Rumbaugh, J.: The Unified Software Development Process, Addison-Wesley (1999) 5. Jacobson, I., Ericsson, M., and Jacobson, A.: The Object Advantage - Business Process Reengineering with Object Technology, Addison-Wesley (1996) 6. Kruchten, P.: The Rational Unified Process: An Introduction, Addison-Wesley (2003)

Research on Dynamic Updating of Grid Service Jiankun Wu, Linpeng Huang, and Dejun Wang Department of Computer Science, Shanghai Jiaotong University, Shanghai, 200240, P.R. China [email protected] [email protected] [email protected]





Abstract. In complicated distributed system based on grid environment, the grid service is inadequate in the ability of runtime updating. While in the maintenance of systems in grid environment, it is an urgent issue to solve to support the transparent runtime updating of the services, especially in the case of services communicating with each other frequently. On the basis of researches on the implementation of grid services and interaction between them following WSRF [3], this paper introduces proxy service as the bridge of the interaction between services and achieved the ability to support the runtime dynamic updating of grid services. Gird service updating must happen gradually, and there may be long periods of time when different nodes run different service versions and need to communicate using incompatible protocols. We present a methodology and infrastructure that make it possible to upgrade grid-based systems automatically while limiting service disruption. Keywords: Grid service, Dynamic updating, Proxy service, Simulation service.

1 Introduction With the change of application requirements and wide use of Internet, the across-area and across-organization complicated applications have developed greatly in various fields. The distributed technology has become the main method in these applications. Accompanying with the system expanding day by day, the maintenance and modification of the system became more frequent. Research shows that nearly half of costs are spent in the maintenance of the complicated distributed system. Services have to be paused in the traditional procedure of software maintenance, but some system such as Bank must provide services continuously in 24 hours, any short-time pause will make great lost. How to resolve the dilemma? Answer is Dynamic Updating technology. Software Updating is defined as the dynamic behavior including software maintenance and update in the life-cycle of software system [6]. Due to maintaining the system with whole system working normally, Dynamic Updating is significant. The grid computing technology is the latest achievement of the development of distributed technology, aiming to resolve the resource share and coordination in WAN distributed environment and avoid the drawbacks such as inadequate computation ability or unbalance loads[1][7][8]. It is a trend to develop new complicated system based on grid technology and transplant the current system into grid environment. Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1107–1114, 2007. © Springer-Verlag Berlin Heidelberg 2007


As application system based on grid technology consists of services with specific function, the system maintenance is mainly maintaining services. As the same as other distributed systems, the services maintenance in grid application system still face the problem of terminating service. Therefore it is necessary to introduce Dynamic Updating technology in the service maintenance of grid environment. It has more practical significance especially when the built system is in its long-time running. In the current grid technology, if we want to update or modify the working grid services, we must stopped and start new service to run. This model is inadequate in dynamic switch in running service. The substitution of grid service will make part of the system terminated or produce long delay, especially in the case of service communicate with each other frequently. The proxy service and simulation service are introduced in the architecture supporting grid service updating. The proxy service is not only responsible for the service requests transmitting but also responsible for contacting with updating component information service through subscribe/publish styles to obtain the new version information in time. The interaction and interface of different service will be transparency by introducing proxy service. And the simulation service is responsible for simulating of behavior and state format between different versions of service. This paper presents a flexible and efficient updating method that enables gridbased systems to provide service during updating. We present a new methodology that makes it possible to updating grid-based systems while minimizing disruption and without requiring all upgrades to be compatible. The rest of paper is organized as follow. In section 2, the updating requirements of grid service is discussed. In section 3, the architecture and relative technology supporting grid service updating are presented and discussed. In section 4, prototype system and relative tests are described. Finally, summary and future works are given.

2 Architecture Supporting Grid Service Updating

The architecture models a grid-based system as a collection of grid services. A service has an identity, a collection of methods that defines its behavior, and a collection of resources representing its state. Services communicate by sending SOAP messages. A portion of a service's state may be persistent. A node may fail at any point; when the node recovers, the service reinitializes itself from the persistent portion of its state, and when updating, the persistent state may need to change its data format for the new version. To simplify the presentation, we assume each node runs a single top-level service that responds to remote requests. Thus, each node runs a top-level service—the proxy grid service. An upgrade moves a system from one version to the next by specifying a set of service updates, one for each grid service that is being replaced. The initial version has version number one and each subsequent version has the succeeding version number.


2.1 System Architecture

A class update has six components: <oldService, newService, TF, SF, pastSimulationService, futureSimulationService>. OldService identifies the service that is now obsolete; newService identifies the service that is to replace it. TF identifies a transform function that generates an initial persistent state for the new service from the persistent state of the old one. SF identifies a scheduling function that tells a node when it should update. PastSimulationService and futureSimulationService identify services for simulation objects that enable nodes to interoperate across versions. A futureSimulationService allows a node to support the new service's behavior before it upgrades; a pastSimulationService allows a node to support the old service's behavior after it upgrades. These components can be omitted when not needed.

Fig. 1. Updating architecture
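The six-component class update can be pictured as a small descriptor object, roughly as sketched below. The field names follow the components listed above, while the class itself and the example values are illustrative assumptions, not part of the described platform.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ClassUpdate:
    """Descriptor for one grid-service replacement within an upgrade."""
    old_service: str                          # identity of the obsolete service
    new_service: str                          # identity of the replacement service
    transform_function: Optional[Callable]    # TF: old persistent state -> initial new persistent state
    scheduling_function: Optional[Callable]   # SF: tells a node when it should update
    past_simulation_service: Optional[str]    # supports the old behaviour after a node upgrades
    future_simulation_service: Optional[str]  # supports the new behaviour before a node upgrades
    version: int = 1                          # upgrades are numbered consecutively from one

# Example: version 2 replaces a hypothetical SimulationService v1, carrying state forward unchanged.
update = ClassUpdate("SimulationService_v1", "SimulationService_v2",
                     transform_function=lambda state: state,
                     scheduling_function=None,
                     past_simulation_service="SimulationService_v1_past",
                     future_simulation_service=None,
                     version=2)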

2.2 Analysis of Proxy Service Mechanism The object of proxy service introducing is obtaining the transparency between services. When the grid service is updating, the other grid services in the same grid system will be not aware of it. The proxy service is not only responsible for the service requests transmitting but also responsible for contacting with updating component information service through subscribe/publish styles to obtain the new version information in time. 2.3 Version Management Because the updating doesn’t complete in twinkling, it is necessary to support the multi version coexist at same time. The simulation service is responsible for the simulating of different interface and saving and recovering of states between current version and old version, current version and new version services.


In order for the proxy service to accurately locate the simulation service of the relevant version, each simulation service has a resource that holds version information such as request interfaces, parameter formats, the URL for the software data, and so on. When a service update happens, the workflow of the proxy service is shown in Figure 3: the proxy service locates the simulation service according to the relevant resource, delivers the service request to the corresponding simulation service, and finally returns the result to the service requester.

Fig. 2. Version information in service updating

Fig. 3. Simulation procedure

2.4 Subscribe/Publish Model in Updating Procedure The subscribe/publish style is adopted for publishing service version changing information in order to make the updating information report more quickly and reduce the load of network. The proxy service in every grid node subscribes to the updating component information service for the services version changing information. Due to the proxy service receiving the information, it will make some actions according to the relationship between current service in the node and the new service. This style is more efficient than the continue request/report method in the traditional updating system. It makes the nodes deployed grid service focus on the main computing task without initiatively querying the new service version information all the time. As showed in figure 4, the proxy service is activated as a basic service when the grid service container starts running. At the same time, the proxy service initiatively


subscribe to the service version changing information of updating component information service. The new version grid service is stored in the grid data base and the URL of the data base is hold in the resource of updating component information service. As the proxy services aware of the new service information of current service, it requests the Grid Ftp with URL information to transmit the software data to current node and deploys it. It reports the updating mid-states through request/reports method to updating information service and the states are also represented by resource which hold the URL of the data base storing the mid-states.

Fig. 4. Interaction model in updating platform
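On the proxy-service side, the publish/subscribe interaction just described might look roughly like the sketch below: a notification carries the new version information and the URL of the stored service package, the package is fetched (the paper uses GridFTP for this), deployed, and the intermediate updating states are reported back. All class and method names here are assumptions for illustration; they are not GT4/WSRF APIs.

class ProxyService:
    """Sketch of the proxy service's reaction to a version-change notification."""

    def __init__(self, current_version, transfer, deployer, state_reporter):
        self.current_version = current_version
        self.transfer = transfer              # e.g. a GridFTP-style file transfer client
        self.deployer = deployer              # installs the service package on this node
        self.state_reporter = state_reporter  # reports mid-states to the updating information service

    def on_version_published(self, notification):
        """Called when the updating component information service publishes a new version."""
        if notification["version"] <= self.current_version:
            return                                            # nothing newer than what we run
        package = self.transfer.fetch(notification["package_url"])
        self.state_reporter.report("downloaded", notification["version"])
        self.deployer.deploy(package)
        self.state_reporter.report("deployed", notification["version"])
        self.current_version = notification["version"]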

2.5 Scheduling of Service Management We could add filters to the model that would determine some subset of nodes that need to upgrade. Adding filters is enough to allow restructuring a system in arbitrary ways. In order to make the grid service dynamic updating more efficient, dynamic grid service updating scheduling which is based on monitoring the load of nodes is adopted in this paper. The performance evaluating model that bases on CPU frequency, CPU load, memory capacity, occupied proportion of memory, disk capacity and occupied proportion of disk is built. This model could make the updating procedure more efficient and reduce the service interruption. We adopt the following formula to define the evaluation.

ω = ω_CPU + ω_MEM + ω_DISK   (1)

ω_CPU = p_CPU_Freq ∗ CPU_Freq + p_CPU_Load ∗ CPU_Load   (2)

ω_MEM = p_MEM_Cap ∗ MEM_Cap + p_MEM_Occupied ∗ MEM_Occupied   (3)

ω_DISK = p_DISK_Cap ∗ DISK_Cap + p_DISK_Occupied ∗ DISK_Occupied   (4)

In the above formulas, ω is the final evaluation parameter, ω_CPU is the CPU evaluation parameter, ω_MEM is the memory evaluation parameter, ω_DISK is the disk evaluation parameter, p_** is the coefficient of the corresponding load parameter **, and ** is the load parameter derived by the monitor. Through the evaluation of these parameters, the updating administrator or the updating management system orders the nodes according to ω and selects the lightly loaded nodes, within a specified scope, to update to the new version service first. The updating is completed according to the same rule until all the services in the system have been updated.
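A direct reading of formulas (1)-(4) and the selection rule is sketched below. The weight values, the batch size and the direction of the ordering (whether a lightly loaded node has a small or large ω) depend on how the coefficients are signed, which the text leaves open; everything here is a placeholder for illustration.

def evaluate_node(metrics, weights):
    """Compute omega = omega_CPU + omega_MEM + omega_DISK from monitored metrics, per Eqs. (1)-(4)."""
    omega_cpu = weights["cpu_freq"] * metrics["cpu_freq"] + weights["cpu_load"] * metrics["cpu_load"]
    omega_mem = weights["mem_cap"] * metrics["mem_cap"] + weights["mem_occupied"] * metrics["mem_occupied"]
    omega_disk = weights["disk_cap"] * metrics["disk_cap"] + weights["disk_occupied"] * metrics["disk_occupied"]
    return omega_cpu + omega_mem + omega_disk

def select_update_batch(nodes, weights, batch_size):
    """Order the nodes by their evaluation omega and pick the first batch to update."""
    ranked = sorted(nodes, key=lambda node: evaluate_node(node["metrics"], weights))
    return ranked[:batch_size]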

3 Prototype and Analysis In order to validate the method’s validity, we build a grid platform infrastructure which supports grid service dynamic updating. GT4[2] is adopted as the software platform and service is developed conform to WSRF[3] specification. The updating scheduling basing on monitor of computing resource with WS-MDS[2] will make the updating procedure more efficient through selecting more optimal updating subset of grid system nodes. The physical environment is showed as figure 5.


Fig. 5. Grid environment supporting service updating

4 Summary and Future Work A grid service dynamic updating method in grid environment is presented in this paper and proxy service is introduced in this method for service request transmitting. The transparency between services could reach by introducing proxy service. The mechanism supporting multi version coexist at same time is introducing simulation service. Simulation service is responsible for simulating interface behavior and states format transferring of different versions. In the aspect of state transferring, we adopt a mature state transfer method used in other updating system. In the future, we will research more suitable state transferring mechanism for system constructed by grid service. Acknowledgments. This paper is supported by Grand 60673116 of National Natural Science Foundation of China, Grand 2006AA01Z166 of the National High Technology Research and Development Program of China (863).

References 1. Foster, I., Kesselman, C., Tuecke, S., “The Anatomy of the Grid: Enabling Scalable Virtual Organization”, International Journal of Supercomputer Applications, 2001.3, Vol. 15(3), pp200-222 2. Globus Toolkit 4.0. http://www.globus.org/, 2006.11 3. WSRF-The WS-Resource Framework. http://www.globus.org/wsrf/, 2006.5 4. Michael Hicks. Dynamic Software Updating. PhD thesis, Computer and Information Science, University of Pennsylvania, 2001


5. Peter Ebraert, Yves Vandewoude, Theo D’Hondt, Yolande Berbers. Pitfalls in unanticipated dynamic software evolution. Proceedings of the Workshop on Reflection, AOP and Meta-Data for Software Evolution(RAM-SE'05), 41-51 6. Yang Fu-qing, Mei Hong, Lu Jian , Jin Zhi. Some Discussion on the Development of Software Technology. Acta Electronica Sinica(in Chinese) ,2002, 30(12A):1901-1906 7. I Foster, C Kesselman. The Grid: Blueprint for a new computing infrastructure1.San Francisco: Morgan2Kaufmann ,1998 8. Ian Foster1, Carl Kesselman, et al. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. http://www.globus.org/reserch/papers/ ogsa.pdf 9. Iulian Neamtiu, Michael Hicks, Gareth Stoyle, Manuel Oriol. Practical Dynamic Software Updating for C. Proceedings of the ACM Conference on Programming Language Design and Implementation (PLDI2006), pp72-83. 10. G. Bronevetsky, M. Schulz, P. Szwed, D. Marques, and K. Pingali. Application-level check pointing for shared memory programs. In Proc. ASPLOS, 2004. 11. J. S. Plank. An overview of checkpointing in uniprocessor and distributed systems, focusing on implementation and performance. Technical Report UT-CS-97-372, Computer Science Department, the University of Tennessee, 1997. 12. J. M. Smith. A survey of process migration mechanisms. ACM Operating Systems Review, SIGOPS, 22(3):28–40, 1988.

Software Product Line Oriented Feature Map Yiyuan Li, Jianwei Yin, Dongcai Shi, Ying Li, and Jinxiang Dong College of Computer Science and Technology, Zhejiang Univ., Hangzhou 310027, China [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. The core idea of software product line engineering is to develop a reusable infrastructure that supports the software development of a family of products. On the base of domain analysis, feature modeling identifies commonalities and variability of software products in terms of features to provide an acknowledged abstract to various stakeholders. The concept of feature map is proposed to perfect feature model. It supports customized feature dependencies and constraint expresses, provides the capability to navigate and locate the resource entities of features. Ontology is introduced as the representation basis for the meta-model of feature maps. By the means of selecting features to construct the reusable infrastructure, the components of feature implementation are rapidly located and assembled to produce a family of software products meeting certain dependencies and constraints. Keywords: Variability, Feature map, Resource navigation, Ontology.

1 Introduction Currently the manufacture of software suffers from problems such as individually customized requirements and frequent changes of business requirements. As a result, the traditional software development mode - developing a software product specifically for a certain application's requirements - costs more and is less efficient and maintainable. In this mode it is hard to meet the requirements of software development in a large-scale customization environment. The purpose of software production for mass customization is to produce and maintain a family of software products with similar functions, to figure out both their commonalities and variability, and to manage these features [1]. It represents the trend of the software factory's evolution. The software product line is an effective way to implement software production for mass customization. It is a set of software systems with common, controllable features. The core idea of software product line engineering is to develop a reusable infrastructure that supports the software development of a family of products [2]. A software product line typically consists of a product line architecture, a set of components and a set of products [3]. The characteristic of software development applying software product line principles is to maintain the common software assets and reuse them during the development process, such as the domain model, software


architecture, process model, components, etc. Each product derives its architecture from the product line architecture, instantiates and configures a subset of the product line components, and usually contains some product-specific code. The instantiated products constitute a family of software products in the domain. Feature modeling is the mainstream of domain analysis for the software product line. Its main purpose is to identify all commonalities and variability in the software product line; the outputs of feature modeling are all potential products of the product line [4]. FORM [5] is a well-known feature-based development method. The difference between domain products and family products reflects the variability of the software product line [2]. The variability point model [6, 7] models the variability of a software product line in four ways. The complex dependency relationships among variability points are presented as first-order expressions [8]. From the viewpoint of software configuration management, the variability management of a software product line can be divided into nine sub-modules along two dimensions [9]. By analyzing the deficiencies of current feature modeling and its description languages, this paper proposes an expanded feature model for software product lines – the feature map. It improves feature dependency description and restriction expression, and supports quick navigation to the feature resource artifacts of a software product line in a distributed, collaborative development environment. Its meta-model is also presented.

2 Feature Map A feature is a first-order entity in a domain. It denotes a capability or specialty possessed by systems, and it is the only determinate abstraction in the domain that can be understood simultaneously by domain experts, users and developers. To a certain extent, a feature is an expression of the ontological knowledge of the application domain. 2.1 Deficiency of the Feature Model Feature modeling identifies the commonalities and variability of all products in a software product line by analyzing domain features and their relationships. A domain reference architecture can be built according to the feature model, and the constituent units of the architecture can be bound to related component entities. However, existing feature models and their description techniques have several deficiencies. Firstly, each domain may have its own, not fully predictable, feature interaction relations due to its variety; although existing feature models summarize and analyze the usual feature relations, they cannot describe all domain-specific feature dependency relations. Secondly, existing feature models tend to be built around the functions of domain systems, yielding functional features, but they seldom consider non-functional domain features such as performance, cost and throughput, and they lack effective means to describe and express them. Thirdly, domain feature analysis runs through all phases of the software development lifecycle and refers to many resource entities such as requirement specifications, design models and component entities; existing feature models only discuss the production of software products from the viewpoint of feature selection and ignore the problem of feature instantiation, including the selection and locating of the resource entities related to domain features. Fourthly, there may exist more than one component entity that implements


the functions presented by a certain feature, giving a choice among them. Existing feature models ignore the variability introduced by the feature implementation scheme. It is therefore necessary to expand existing feature models to improve their ability to model and describe feature dependency relationships, non-functional feature constraints, feature resource navigation and domain variability.
2.2 Definition of Feature Map
This paper proposes the concept of the feature map. It supports feature dependency relationships and restriction expressions, and provides the capability of locating and navigating resource entities, so that features can be selected according to specified requirements, resource entities can be located and assembled quickly, and a software product family satisfying the dependency relationships and restriction conditions can be generated. A feature map is defined as the tuple FM = (F, A, C, R, λA, λC, λR), where:
- F is the feature set of the feature map;
- A is the feature association set of the feature map;
- C is the feature constraint expression set of the feature map;
- R is the feature resource entity set of the feature map;
- λA denotes a mapping from F to the set P(A), i.e. λA: F → P(A), where P(A) is the set of all subsets of A. λA meets the following condition:

∀a ∈ A, ∃F′ ⊆ F with |F′| ≥ 2 such that ∀f ∈ F′, a ∈ λA(f).
This means that an arbitrary feature can participate in multiple dependency relationships with other features, while each feature association involves at least two features.
- λC denotes a mapping from F to the set P(C), i.e. λC: F → P(C), where P(C) is the set of all subsets of C. λC meets the following condition:
∀c ∈ C, ∃F′ ⊆ F with |F′| ≥ 1 such that ∀f ∈ F′, c ∈ λC(f).
That is to say, an arbitrary feature can be restricted by multiple constraint expressions, while each feature constraint applies either to a single feature or to a set of features.
- λR denotes a mapping from F to the set P(R), i.e. λR: F → P(R), where P(R) is the set of all subsets of R. λR meets the following condition:
∀f ≠ f′ ∈ F, λR(f) ∩ λR(f′) = ∅ and ∪f∈F λR(f) = R.
That is to say, each feature owns its own resource entities, and together they cover R.
Thus the concept of the feature map consists of two parts. On the one hand, the feature map extends existing feature models to construct its infrastructure and foundation by refining the definition of feature dependency relationships and adding feature constraint expressions, which strengthen the feature configuration relationships. On the other hand, the feature map builds its superstructure by introducing the resource entities of features and providing the capability to rapidly navigate to and locate them.
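To make the definition above concrete, the following is a minimal sketch, written for illustration and not taken from the paper, of how the tuple FM = (F, A, C, R, λA, λC, λR) could be represented in Java; the class and method names are assumptions.

import java.util.*;

// Illustrative representation of the feature map tuple FM = (F, A, C, R, lambdaA, lambdaC, lambdaR).
public class FeatureMap {
    final Set<String> features = new HashSet<>();       // F
    final Set<String> associations = new HashSet<>();   // A
    final Set<String> constraints = new HashSet<>();    // C
    final Set<String> resources = new HashSet<>();      // R
    final Map<String, Set<String>> lambdaA = new HashMap<>();  // F -> P(A)
    final Map<String, Set<String>> lambdaC = new HashMap<>();  // F -> P(C)
    final Map<String, Set<String>> lambdaR = new HashMap<>();  // F -> P(R)

    // Checks the lambdaR condition: the resource sets of distinct features are
    // pairwise disjoint and together cover R.
    boolean resourcePartitionHolds() {
        Set<String> seen = new HashSet<>();
        for (String f : features) {
            for (String r : lambdaR.getOrDefault(f, Set.of())) {
                if (!seen.add(r)) return false;   // the same resource is owned by two features
            }
        }
        return seen.equals(resources);            // union of lambdaR(f) over all f equals R
    }
}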


With these two parts combined, by selecting features to construct the reusable infrastructure, the component entities that implement the features are rapidly located and assembled to produce a family of software products meeting the given dependencies and constraints. 2.3 Meta-model of Feature Map

Features, together with their dependency relationships, constraint expressions and resource entities, are abstracted as the basic elements of the meta-model. Corresponding to the web ontology language OWL, the modeling elements of the meta-model are divided into ontology class elements, object property elements, data property elements and data type elements. Among them, an ontology class element represents a semantic subject; an object property element represents an association relationship among ontology class elements, expressed as an object property of an ontology class element, and both its domain and its range are ontology class elements; a data property element represents the non-functional characteristics of an ontology class element, and its domain is an ontology class element while its range is a data type element.

Fig. 1. The Meta Model of Feature Map Based on Ontology

The ontology-based meta-model of the feature map is shown in Fig. 1. Feature, FeatureBind, Association, Constraint and Resource, among others, are defined as ontology classes; restrictsObject, hasResource, playedBy and hasRole, among others, are defined as ontology object properties to establish the relation network among the semantic subjects; and name, param and location, among others, are defined as data properties to describe the properties of the semantic subjects. The meanings of the main meta-model elements are as follows:


Feature: the ontology expression of a feature in the feature map; it is a common or variable system characteristic that can be observed externally. A Feature ontology instance is identified by a unique global name.
FeatureBind: the ontology class of feature binding; it is associated with the binding mode and binding time through the bindMode and bindTime object properties, respectively.
Mode: the binding mode of a feature, including mandatory, optional, or, alternative and exclude. Classified by whether a binding mode is affected by the binding of other features, mandatory and optional are unary binding modes while or, alternative and exclude are multiple binding modes. Classified by feature variability, only features marked mandatory are the common, indispensable ones, while features marked with the other modes are optional features that depend on the specific software products.
Time: the binding time of a feature; it is only meaningful for the variable features marked optional, or, alternative or exclude. Its value can be design-time, compile-time, implement-time, assemble-time, load-time, instantiate-time or runtime.
Resource: the expression of a feature resource. It marks the software product development phase that produced the resource via the belongsTo object property and the Phase ontology class, and it indicates the type of the entity object referenced by the resource via the type object property and the ResourceType ontology class. The resource type is determined by the phase of software product development. The entities referenced by a resource may be located anywhere in the distributed network environment and can be navigated to by URI through the location object property.
Phase: a stage of software product development, including requirement, design, implementation, test and maintenance. Although software product line engineering based on feature modeling is macroscopically similar to traditional software engineering oriented to single-product development in how it defines the phases of software development, the two differ dramatically in the concrete approach and details of each phase [10].
ResourceType: the type of a resource, e.g., a requirements analysis document, a model or flow design, or a component artifact; it depends on the phase of software product development in which the resource is produced.
Constraint: a non-functional restriction on features. A constraint expression consists of a set of parameters, operators and variables. A Constraint is associated with the Feature ontology class through the restrictsObject object property, which determines the restricted object. A constraint can be defined on the property set of a single feature, or it can take multiple features as restricted objects and establish a feature constraint relationship under the general restriction.
Association: a relationship between features. It is associated with the AssociationType ontology class through the type object property to determine the relation type, and with the Role ontology class through the hasRole object property to determine the objects involved in the association. An association is built on at least two associated objects.
AssociationType: the type of an association, including composed-of, implemented-by, require, generalization/specialization and activate. Associations have orientations; composed-of, generalization/specialization and implemented-by are structural associations, while require and activate are reference associations.


Role: the object referred to by an association. It is associated with the Feature ontology class through the playedBy object property to determine the actual feature that assumes the role, and with the RoleType ontology class via the type object property to indicate the role type. The assignment of role types determines the orientation of the association.
RoleType: the type of a role. Its actual range is decided by the type of the association that accompanies the role.
The hierarchy of the feature map is built from relationships such as composed-of, generalization/specialization and implemented-by among features. Common features are represented by setting the binding mode to mandatory, while variable features are established by marking the binding mode as optional, or, alternative or exclude. On the one hand, dependency and interaction among features are expressed by associations such as implemented-by, require and activate, and the orientations of associations are determined by the roles that features take within them. On the other hand, constraint expressions are built on the properties of a single feature or on the property set of a feature group. All resource entities related to features in each development phase are navigated to in the network environment by location. In this way, the structural associations, dependency associations and constraint conditions among features are completely established. Meanwhile, by adding instances of AssociationType, RoleType and ResourceType, the meta-model can describe new associations and locate new resource entities, which provides extensibility.
The variability of the feature map is represented in several aspects. Firstly, regarding binding mode and binding time, the former directly determines whether a feature is selected, while the latter determines when optional features are instantiated. Secondly, relations among features such as require and activate determine whether other features that have a dependency or interaction association with the present feature will be selected. Thirdly, a constraint expression places a quantitative constraint on the property set of a single feature or a feature group, and it further affects the selection of the component entities that implement the feature. Fourthly, based on the navigation and locating of resource entities, software products instantiated by selecting resource entities with the same functions but different implementation plans will have different non-functional characteristics such as performance and quality of service.
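As an illustration only, and not part of the paper, the controlled vocabularies of the meta-model described in this section could be captured in Java as follows; the names mirror the ontology classes, but the representation itself is an assumption of this sketch.

// Sketch of the meta-model vocabularies; names mirror the ontology classes above.
enum Mode { MANDATORY, OPTIONAL, OR, ALTERNATIVE, EXCLUDE }                 // binding modes

enum BindingTime { DESIGN, COMPILE, IMPLEMENT, ASSEMBLE, LOAD, INSTANTIATE, RUNTIME }

enum AssociationType { COMPOSED_OF, IMPLEMENTED_BY, REQUIRE, GENERALIZATION_SPECIALIZATION, ACTIVATE }

enum Phase { REQUIREMENT, DESIGN, IMPLEMENTATION, TEST, MAINTENANCE }

// A resource entity is produced in a phase, has a type decided by that phase,
// and is navigated to through a URI-style location.
record Resource(String name, Phase phase, String resourceType, String location) { }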

3 Case Study Figure 2 shows the feature map of a mobile telephone software product line and its mapping to the meta-model. The mobile telephone software product line is composed of functional features such as password protection, game, telephone directory and browser. Among them, password protection and browser are optional features. Multiple games may be chosen, but under certain limitations, such as a small memory capacity, only one of G3 and G4 can be chosen. To be operational, the length of the password must be set to 6, the length of the telephone directory list must be no more than 250, and the memory required by the embedded browser must be less than 2 MB. In the process of feature analysis, each functional feature has a related requirements specification, design model and implementation component. Some functional features, for example G2, even have several alternative implementation schemes.


Functional features such as password protection, game, telephone book and browser are modeled as Feature ontology instances; whether the selection of a feature is mandatory or optional is modeled with the Mode ontology; the maximum length of the password, the capacity of the telephone book and the memory consumed by the browser are modeled as Constraint ontology instances; the hierarchical structure of features and the mutually exclusive relationship between G3 and G4 are modeled as Association ontology instances; requirements documents, design models and component entities are modeled with the ResourceType ontology; and all lifecycle phases of software development are modeled with the Phase ontology. The whole infrastructure of the feature map is constructed by the associations among these ontology instances via object properties, while its superstructure is constructed by modeling resource references as the location property used to navigate to and locate the resource entities.

Fig. 2. Feature Map and Its Meta-model of Mobile Telephone Software Product Line
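As a toy illustration of the constraints in this case study (our sketch; the values for password length, directory capacity and browser memory come from the text above, while the class and variable names are assumptions):

// Minimal, self-contained check of the mobile-phone product-line constraints.
public class MobilePhoneConfigCheck {
    public static void main(String[] args) {
        int passwordLength = 6;        // constraint: password length must be 6
        int directoryEntries = 250;    // constraint: at most 250 directory entries
        int browserMemoryKB = 1800;    // constraint: required memory below 2 MB
        boolean g3Selected = true;     // G3 and G4 are alternatives:
        boolean g4Selected = false;    // exactly one of them may be chosen

        boolean ok = passwordLength == 6
                && directoryEntries <= 250
                && browserMemoryKB < 2048
                && (g3Selected ^ g4Selected);
        System.out.println(ok ? "configuration satisfies the constraints"
                              : "configuration violates the constraints");
    }
}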

4 Conclusion The core idea of software product line engineering is to develop a reusable infrastructure that supports the development of a family of software products. It is an efficient way to implement mass-customized software production. Feature modeling is the mainstream of domain analysis for software product lines. It identifies the commonalities and variability of the products of a product line in terms of features, providing an abstraction acknowledged by the various stakeholders. The uncertainty of variable features determines the variability of the software product line. Existing feature models and their descriptions cannot fully support the diversity of feature dependencies in


different domains. They do not support the modeling and description of constraint expressions and cannot navigate to and locate resources in a network environment. Moreover, their variability analysis does not consider the alternatives among component entities that implement the features. In this paper, the concept of the feature map is proposed to improve the feature model. Ontology is introduced as the representation basis for the meta-model of the feature map. The feature map supports customized feature dependencies and constraint expressions, and provides the capability to navigate to and locate the resource entities of features. Then, by selecting features to construct the reusable infrastructure, the components that implement the features are rapidly located and assembled to produce a family of software products meeting the given dependencies and constraints. Further work is to refine the feature map through continued study and practice, including how to define and describe its related behavioral characteristics and state transitions.

References
1. Charles W. Krueger. "Software Mass Customization". BigLever Software, Inc. (2001)
2. Michel Jaring, Jan Bosch. "Representing Variability in Software Product Lines: A Case Study". Proceedings of the 2nd International Conference on Software Product Lines (SPLC'02), Springer Verlag LNCS 2379 (2002) 15–36
3. J. Bosch. "Design & Use of Software Architectures - Adopting and Evolving a Product-Line Approach". Addison-Wesley (2000)
4. David Benavides, Pablo Trinidad, Antonio Ruiz-Cortes. "Automated Reasoning on Feature Models". Proceedings of the 17th International Conference on Advanced Information Systems Engineering (CAiSE'05), Springer Verlag LNCS 3520 (2005) 491–503
5. Kang KC, Kim S, Lee J, Kim K, Shin E, Huh M. "FORM: A Feature-Oriented Reuse Method with Domain-Specific Reference Architectures". Annals of Software Engineering (1998) 143–168
6. Jan Bosch, Gert Florijn, Danny Greefhorst. "Variability Issues in Software Product Lines". Proceedings of the 4th International Workshop on Software Product Family Engineering (PFE'02), Springer Verlag LNCS 2290 (2002) 13–21
7. Diana L. Webber, Hassan Gomaa. "Modeling Variability in Software Product Lines with The Variant Point Model". Elsevier (2003)
8. Marco Sinnema, Sybren Deelstra, Jos Nijhuis, Jan Bosch. "COVAMOF: A Framework for Modeling Variability in Software Product Families". Proceedings of the 3rd International Conference on Software Product Lines (SPLC'04), Springer Verlag LNCS 3154 (2004) 197–213
9. Charles W. Krueger. "Variation Management for Software Production Lines". Proceedings of the 2nd International Conference on Software Product Lines (SPLC'02), Springer Verlag LNCS 2379 (2002) 37–48
10. Kyo C. Kang, Jaejoon Lee, Patrick Donohoe. "Feature-Oriented Product Line Engineering". IEEE Software, Volume 19, Issue 4, July-Aug (2002) 58–65

Design and Development of Software Configuration Management Tool to Support Process Performance Monitoring and Analysis Alan Cline1, Eun-Pyo Lee2, and Byong-Gul Lee2 1

Ohio State University, Department of Computer Science and Engineering, Columbus, Ohio, USA [email protected] 2 Seoul Women’s University, Department of Computer Science, Seoul, Korea [email protected], [email protected]

Abstract. Most SCM tools underestimate the potential of monitoring and reporting the performance of various software process activities, and delegate the implementation of such capabilities to other CASE tools. This paper discusses how an SCM tool can be extended and implemented to provide valuable SCM information (e.g., metric data) for monitoring the performance of various process areas. With the extended SCM tool capability, stakeholders can measure, analyze, and report the performance of process activities even without using expensive CASE tools. Keywords: Software Configuration Management, Process Metric.

1 Introduction Software Configuration Management (SCM) is a key discipline for the development and maintenance of large and complex software systems [1], [2]. Much research shows that SCM is the most basic management activity for establishing and maintaining the integrity of the software products produced throughout the software life cycle. The activities of SCM include identifying configuration items/units, controlling changes, maintaining the integrity and traceability of configuration items, and auditing and reporting configuration management status and results. Existing configuration management tools support some or a combination of these activities with functions including change control, version control, workspace management, and build/release control [3]. However, most SCM tools underestimate the benefits of using the SCM metric capability in monitoring and measuring other process areas and leave its implementation to other CASE tools such as project management tools or spreadsheet software [4]. We believe that the SCM tool capability can be extended to provide valuable services and information for monitoring and measuring the performance of various process activities, such as project management, requirement management, or software quality assurance. For example, to monitor the stability of requirements (SR) during


requirement management, some of the primitive metric data from the change control function can be utilized as follows:

SR = NCR / ( ∑i Initial Requirementsi ∙ Ti ) .        (1)

where NCR represents the total number of change requests on the requirements and Ti is the time interval between the initial check-in and the final release of a requirement. Current SCM tools rarely provide a mechanism for incorporating the management and utilization of SCM metrics into other process areas. They usually define only a small set of primitive SCM metrics and leave it to the user to apply them as needed. Besides, current tools have no way of monitoring and measuring the performance of a newly defined process or of anything outside the defined scope. For example, a user might be interested in measuring the effectiveness of the training plan prepared for an organization, or a senior manager might want to create a new process role with a new responsibility and see how the jobs get done. In such situations, the user tends to purchase yet another tool fit for that purpose. This paper describes the design and development of an SCM tool which can provide measurement, analysis, and reporting of process performance in these broader process areas. The remainder of this paper is organized as follows. Section 2 reviews related work by comparing the metric capabilities of existing SCM tools. Sections 3 and 4 describe the design and usage of our SCM tool in support of the metric capability. Finally, the conclusion and future work appear in Section 5.
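As an aside, one possible reading of Eq. (1) can be sketched in code as follows; this is our illustration, not part of any existing SCM tool, and the data layout is assumed.

import java.util.List;

// Sketch: requirement stability SR = NCR / sum_i(InitialRequirements_i * T_i).
public class RequirementStability {
    // One record per requirement: its initial size, the interval T_i between initial
    // check-in and final release, and the number of change requests raised against it.
    record RequirementHistory(double initialRequirements, double intervalDays, int changeRequests) { }

    static double stability(List<RequirementHistory> histories) {
        int ncr = histories.stream().mapToInt(RequirementHistory::changeRequests).sum();
        double denominator = histories.stream()
                .mapToDouble(h -> h.initialRequirements() * h.intervalDays())
                .sum();
        return denominator == 0 ? 0 : ncr / denominator;
    }
}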

2 Current Configuration Management Tools The features of current SCM tools are limited to supporting only a few areas such as requirement management or project tracking and oversight; covering the entire set of software process activities therefore requires purchasing additional expert tools. IBM's ClearQuest provides a workflow management capability and enforces process monitoring. However, for this service to be useful, it needs to be integrated with another tool, Portfolio Manager, for profiling process information from external sources [5]. Telelogic's Synergy, integrated with Dashboard, intends to support the decision making of the project manager by automating the collection, analysis, and reporting of measurement data. However, the tool only focuses on automating requirement management process data [6]. Borland's StarTeam offers a comprehensive solution that includes integrated requirements management, change management, defect tracking, and project and task management [7]. However, its primary usage is limited to SCM only, and it depends on external CASE tools to utilize the SCM information in other process activities. The Continuus/CM toolset also cannot be used in complicated process environments due to its lack of process-related information [8], [9]. The study of [10] supports our view by stating that current SCM tools have: 1) no support for the planning function; 2) insufficient support for process-related functions; 3) no support for report and audit mechanisms; 4) no support for measurement.


3 Development of Configuration Management Process Tool To lessen the problems of existing SCM tools, we propose a Configuration Management Process Tool (CMPT) that implements features with which the software project team can monitor and analyze their process performance by utilizing the SCM metrics for each process area. CMPT is implemented in Java and runs on a single-user workstation connected to a remote shared CVS server.

(In Fig. 1, the data flows connect CMPT with Upper Management, Trainers, the Requirements Manager, Project Manager, QA Manager, Test Manager/Tester, Configuration Manager, and Project Leader/Developer; "change metrics" in the figure denotes defect MTTR, product stability (MTBF), and schedule variance.)

Fig. 1. Context Diagram of the CMPT

Figure 1 illustrates the scope of CMPT and data flow that comes in and out of a system. The requirements manager, for instance, inputs the requirement artifacts for controlling and monitoring of use cases and object model classes. CMPT can deliver to requirement manager the status reports containing the number of defects, the number of changes and change requests, the change owner and date, and the status of change requests. 3.1 Change Control Process Among the many features of CMPT, the change control provides a base playground for monitoring and collecting the various process data. The state flow diagram in Figure 2 represents the invariants, pre-conditions, and post-conditions of change control process. The change control flows are divided into two modes, free text module and executable module. The free text module elements (e.g., Proposal, Charter, Requirement Specification, Design, Test Plan, Training Plan, and etc.) can have several states: REQUEST PROPOSED, APPROVED, COMPLETE, and some intermediate states. Executable modules (e.g., Change Requests, Defects, Code, Test Cases, and Use


Cases) go through seven states: REQUEST PROPOSED, APPROVED, ASSIGNED, IMPLEMENTED, TEST READY, TESTED and COMPLETE. CMPT is designed to monitor and collect the status information for each of these states to provide more accurate and richer metrics for other process areas.
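For illustration, the seven executable-module states and a simplified forward-transition check might look as follows; this is our sketch distilled from the description above, it is not CMPT's implementation, and the rework paths of Fig. 2 (e.g., a failed test sending an item back to development) are omitted.

import java.util.*;

// States of an executable module in the change-control process.
enum CrState { REQUEST_PROPOSED, APPROVED, ASSIGNED, IMPLEMENTED, TEST_READY, TESTED, COMPLETE }

class ChangeControl {
    // Allowed forward transitions only (simplified).
    private static final Map<CrState, Set<CrState>> NEXT = Map.of(
            CrState.REQUEST_PROPOSED, Set.of(CrState.APPROVED),
            CrState.APPROVED,         Set.of(CrState.ASSIGNED),
            CrState.ASSIGNED,         Set.of(CrState.IMPLEMENTED),
            CrState.IMPLEMENTED,      Set.of(CrState.TEST_READY),
            CrState.TEST_READY,       Set.of(CrState.TESTED),
            CrState.TESTED,           Set.of(CrState.COMPLETE),
            CrState.COMPLETE,         Set.of());

    static boolean canMove(CrState from, CrState to) {
        return NEXT.getOrDefault(from, Set.of()).contains(to);
    }
}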


Fig. 2. Flow Items of the SCM Change Process

3.2 User Profiles and Transaction Permissions CMPT provides an access control mechanism so that it can be utilized by different participants in various process areas. Table 1 shows the transactions CMPT permits for each user. The Project Manager inputs the project documents to be controlled and the test and development schedule, and then wants change metrics, repair schedules, development and change statuses, and version change summaries for the product from CMPT. The Requirements Manager may input requirement elements for control and monitoring and want CMPT to deliver reports containing the number of defects, the number of changes and change requests, the change owner and date, and the status of change requests and repairs. The Test Manager and Tester may want CMPT to generate defect repair reports (change metrics) and test schedule variance. Both Developers and Project Leaders may want CMPT to produce merged product versions, code files, traceable version links, and change reports. CMPT can provide the Configuration Manager with version or configuration IDs and configuration status


profiles to help coordinate the various SCM activities. CMPT also provides an audit mechanism to enforce the CM guidelines and policies as planned. For the QA Manager, CMPT provides various QA metrics for each process area under the control of SCM. Table 1. Example of Each User's Transaction Permissions (the table maps the transactions Get metric report, Check out item, Change request, and Approve change request to the roles PM, RM, QM, CM, TM, Project Leader/Developer, Tester, Trainer, and General Users; the individual check marks are not recoverable from the source).
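A minimal sketch of how such a role-to-transaction permission table could be represented (illustrative only; the concrete grants of Table 1 are not reproduced, only the mechanism):

import java.util.*;

// Illustrative role/transaction permission map.
class PermissionTable {
    enum Transaction { GET_METRIC_REPORT, CHECK_OUT_ITEM, CHANGE_REQUEST, APPROVE_CHANGE_REQUEST }

    private final Map<String, EnumSet<Transaction>> grants = new HashMap<>();

    void grant(String role, Transaction t) {
        grants.computeIfAbsent(role, r -> EnumSet.noneOf(Transaction.class)).add(t);
    }

    boolean permitted(String role, Transaction t) {
        return grants.getOrDefault(role, EnumSet.noneOf(Transaction.class)).contains(t);
    }
}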


3.3 Process Metric The key to process metrics analysis is to analyze the actual effort, schedule, deviations from cost and plan, and defects during the project [11], [12]. Table 2 shows a sample of such metrics provided in [11]. In CMPT, these metrics can be calculated from the Cartesian product of fine-grained SCM events (e.g., check-in, release, check-out, change request, change, change completion) and scale measurements (e.g., number of events, frequency of events, time interval, average time).
Table 2. SCM Process Metrics
- Schedule variance = (Actual duration - Planned duration) / Planned duration
- Effort variance = (Actual effort - Planned effort) / Planned effort
- Size variance = (Actual size - Planned size) / Planned size
- Change stability = Number of change requests / Total number of baseline items
- Change density = Number of changes for each baseline / Time span between check-in and check-out
- Residual change density = Number of changes completed / Number of changes requested
- Change distribution in each development phase = Number of changes in each phase / Total number of changes
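For illustration, the metrics of Table 2 reduce to simple ratio computations; the following sketch is ours, with assumed method names, and is not taken from CMPT.

// Sketch of the SCM process metrics in Table 2 as simple ratios.
final class ProcessMetrics {
    static double scheduleVariance(double actual, double planned) { return (actual - planned) / planned; }
    static double effortVariance(double actual, double planned)   { return (actual - planned) / planned; }
    static double sizeVariance(double actual, double planned)     { return (actual - planned) / planned; }

    static double changeStability(int changeRequests, int baselineItems) {
        return (double) changeRequests / baselineItems;
    }
    static double changeDensity(int changesForBaseline, double daysBetweenCheckInAndOut) {
        return changesForBaseline / daysBetweenCheckInAndOut;
    }
    static double residualChangeDensity(int changesCompleted, int changesRequested) {
        return (double) changesCompleted / changesRequested;
    }
    static double changeDistribution(int changesInPhase, int totalChanges) {
        return (double) changesInPhase / totalChanges;
    }
}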

4 Work Scenarios of CMPT This section describes how CMPT can be utilized to produce metrics for monitoring other process activities. Figure 3 shows that CMPT supports a set of transactions and access permissions associated with each user's role. A user can define and customize roles and transactions according to their process conditions and environment.


Fig. 3. Setting SCM Permissions

Figures 4 and 5 show how a user can retrieve or store a configuration item. For checking out a configuration item (Figure 4), CMPT enforces exclusive locking and shows a lock icon for items that have already been checked out. For checking in (Figure 5), a user can check in either free-text documents or executable elements. In CMPT, both the check-in and the check-out events are combined with the scale measurements (e.g., time, frequency, number) to produce finer-grained SCM metrics, such as the number of check-ins or the time of check-out.

Fig. 4. Check-out of Configuration Item

Fig. 5. Check-in of Configuration Item
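A minimal sketch of the exclusive check-out locking behavior described above (illustrative only; CMPT's actual implementation sits on top of a remote CVS server and is not shown here):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: exclusive check-out locks per configuration item.
class CheckOutRegistry {
    private final Map<String, String> lockedBy = new ConcurrentHashMap<>(); // item -> user holding the lock

    // Returns true if the check-out succeeded; false if the item is already checked out.
    boolean checkOut(String item, String user) {
        return lockedBy.putIfAbsent(item, user) == null;
    }

    // Check-in releases the lock only if the same user holds it.
    boolean checkIn(String item, String user) {
        return lockedBy.remove(item, user);
    }
}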

Any project team member can propose a change request for adding new item(s), or for replacing or deleting existing item(s), together with reasons and an expected completion time (Figure 6). All element types of new items are pre-defined by the Configuration Manager. Once a Change Request (CR) is proposed, the CCB reviews the CR or defect for approval or disapproval. If the CR or defect is approved, the Project Manager and Test Manager assign the request to developers and testers for implementation.


Fig. 6. Create a Change Request Proposal

A user can get a metrics report for a selected configuration item together with a graph (Figure 7). The graph view can provide a summary of various aspects of process performance. Users can select a report type from predefined types such as configuration status, history, tracking, or release report. The scale can be chosen from the number of events, frequency of events, time interval, and average time, and the metric can be chosen from check-in, release, check-out, change request, change, and change completion of configuration items. Figure 7 shows the frequency of change requests for each use case; in this case, use case no. 2 has the highest frequency of change requests.

Fig. 7. Get Metrics Report

5 Conclusion and Future Work SCM's capability has to be extended to provide valuable services for monitoring other process or management activities, such as project management or software quality assurance. Current SCM tools rarely provide such a monitoring capability, nor do they provide sufficiently fine-grained data. This paper described the design and development of


an SCM tool which can provide the measurement, analysis and reporting capability for monitoring other process performance without using expensive CASE tools. The proposed CMPT tool can: 1. define and customize the access/role control and associated transaction selection, 2. define and customize the process work flow, and 3. utilize the work flow status information and metrics to provide process performance information. Currently, the CMPT’s process customization capability only applies to change control. For future work, the customization scheme should be extended and enhanced to adopt the various process characteristics and project environments. More studies should also be focused on the reporting, including graphing, extraction, and translation of process metric data. Acknowledgments. This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Advancement) (IITA-2006-(C1090-0603-0032)).

References
1. S. Dart: "Concepts in Configuration Management Systems," Proc. Third Int'l Software Configuration Management Workshop (1991) 1-18
2. D. Whitgift: Methods and Tools for Software Configuration Management, John Wiley and Sons (1991)
3. A. Midha: "Software Configuration Management for the 21st Century," TR 2(1), Bell Labs Technical (1997)
4. Alexis Leon: A guide to software configuration management, Artech House (2000)
5. IBM Rational: ClearCase, http://www-306.ibm.com/software/awdtools/changemgmt/ (2006)
6. Peter Baxter and Dominic Tavassoli: "Management Dashboards and Requirement Management," White Paper, Telelogic (2006)
7. Borland: StarTeam, http://www.borland.com/us/products/starteam/index.html
8. Continuus Software/CM: Introduction to Continuus/CM, Continuus Software Corporation (1999)
9. Merant: PVCS, http://www.merant.com/products/pvcs/
10. Fei Wang, Aihua Ren: "A Configuration Management Supporting System Based on CMMI," Proceedings of the First International Multi-Symposiums on Computer and Computational Sciences, IEEE CS (2006)
11. R. Xu, Y. Xue, P. Nie, Y. Zhang, D. Li: "Research on CMMI-based Software Process Metrics," Proceedings of the First International Multi-Symposiums on Computer and Computational Sciences, IEEE CS (2006)
12. F. Chirinos and J. Boegh: "Characterizing a data model for software measurement," Journal of Systems and Software, v. (74), Issue 2 (2005) 207-226

Data Dependency Based Recovery Approaches in Survival Database Systems Jiping Zheng1,2, Xiaolin Qin1,2, and Jin Sun1 1

College of Information Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016, China 2 Institute of Information Security, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016, China {zhengjiping, qinxcs, sunjinly}@nuaa.edu.cn

Abstract. Recovering from malicious attacks in survival database systems is vital for mission-critical information systems. Traditional rollback and re-execute techniques are too time-consuming and cannot be applied in survival environments. In this paper, two efficient approaches - transaction dependency based and data dependency based - are proposed. Compared to the transaction dependency based approach, data dependency based recovery approaches need not undo innocent operations within malicious and affected transactions; moreover, benign blind writes on damaged data items speed up the recovery process.

1 Introduction Database security concerns the confidentiality, integrity and availability of the data stored in a database [1]. Traditional security mechanisms focus on protection, especially confidentiality of the data. But in some mission-critical systems, such as credit card billing, air traffic control, logistics management, inventory tracking and online stock trading, the emphasis is on how to survive successful attacks [2]. Such systems need to provide limited service at all times and focus on database integrity and availability. Despite existing protection mechanisms, various kinds of attacks, as well as authorized users who exceed their legitimate access or abuse the system, make these systems more vulnerable. Intrusion detection (ID) was therefore introduced. Its two main techniques, statistical profiling and signature identification, can supplement the protection of database systems by rejecting future access by detected malicious attackers and by providing useful hints on how to strengthen the defense. However, ID has several inherent limitations [3]: (a) Intrusion detection makes the system attack-aware but not attack-resistant; that is, intrusion detection itself cannot maintain the integrity and availability of the database in the face of attacks. (b) Achieving accurate detection is usually difficult or expensive, and the false alarm rate is high in many cases. (c) The average detection latency is in many cases too long to effectively confine the damage. Some malicious behaviors cannot be avoided in a DBMS, so effective and efficient recovery approaches must be adopted after the


detection of malicious attacks. The rest of this paper is organized as follows. A summary of related work in this area is included in Section 2. In Section 3, recovery approaches in traditional and survival DBMSs are given: Section 3.1 describes the database and transaction theoretical model, Section 3.2 presents the transaction logging recovery method, and Sections 3.3 and 3.4 address data dependency approaches without and with blind writes, respectively. A performance analysis is given in Section 4. Section 5 concludes the paper.

2 Related Work The traditional, and simplest, method for recovering a database to a consistent state is rollback followed by re-execution of the malicious transactions and those which depend upon them. This method, while effective, requires an undue amount of work on the part of the database administrator and requires knowledge of which transaction was the inappropriate one. Moreover, some benign and innocent transactions need to be re-executed. In general, this is a relatively poor (inefficient) option and inadequate for the purposes of most database installations. To overcome the limitations of this simple rollback model, researchers have investigated various other methods for recovering a database to a consistent state. In general, there are two basic forms of post-intrusion recovery methods [4]: transaction based and data dependency based. The difference lies in whether the system uses the logs to organize recovery around whole transactions or around the data items modified and their interdependencies. Transaction based recovery methods [5-7], mostly referred to as transaction logging methods, rely on the ability of an ancillary structure to re-execute committed transactions that have been committed since the execution of the malicious transactions and affected by them. ODAM [8], and later ITDB [9] and Phoenix [10], are survival DBMSs developed by Peng Liu et al. and Tzi-cker Chiueh, respectively. These prototypes are implemented on top of a COTS (Commercial-Off-The-Shelf) DBMS, e.g. Oracle or PostgreSQL. In these systems, database updates are logged in terms of SQL-based transactions. ODAM and ITDB identify inter-transaction dependencies at repair time by analyzing the SQL log and only undo malicious transactions and the ones affected by them, while Phoenix maintains a run-time inter-transaction dependency graph with selective transaction undo. However, these systems rely on the ability of the recovery system to correctly determine the transactions which need to be redone. Data dependency based recovery methods [11-14] suggest undoing and redoing only the affected operations rather than undoing all operations of affected transactions and then re-executing them. Panda and Tripathy [12], [13] divide the transaction log file into clusters to identify affected items for further recovery. Nevertheless, they require that the log be accessed starting from the malicious transaction until the end in order to perform damage assessment and recovery.


3 Recovery Approaches in Survival Database Systems Like the methods mentioned above, our work is based on the assumption that the attacking transaction has already been detected by intrusion detection techniques. So, given an attacking transaction, our goal is to determine the affected transactions quickly, stop new and executing transactions from accessing affected data, and then carry out the recovery process. In our methods, we suppose that the scheduler produces a strict serializable history and that the log is not modifiable by users. As transactions get executed, the log grows with time and is never purged. Also, the log is stored in secondary storage, so every access to it requires a disk I/O. 3.1 Database and Transaction Theoretical Model To explain our recovery approaches, we first provide the database and transaction theoretical model as below [15]: Definition 1. A database system is a set of data objects, denoted as DB={x1, x2, …, xn}. Definition 2. A transaction Ti is a partial order with ordering relation